Test Report: Docker_Linux_crio_arm64 21835

                    
73e6d6839bae6cdde957e116826ac4e2fc7d714a:2025-11-01:42153

Failed tests (37/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.31
35 TestAddons/parallel/Registry 14.84
36 TestAddons/parallel/RegistryCreds 0.49
37 TestAddons/parallel/Ingress 145.27
38 TestAddons/parallel/InspektorGadget 5.27
39 TestAddons/parallel/MetricsServer 6.35
41 TestAddons/parallel/CSI 35.37
42 TestAddons/parallel/Headlamp 4.1
43 TestAddons/parallel/CloudSpanner 6.41
44 TestAddons/parallel/LocalPath 9.39
45 TestAddons/parallel/NvidiaDevicePlugin 5.27
46 TestAddons/parallel/Yakd 6.26
97 TestFunctional/parallel/ServiceCmdConnect 603.48
116 TestFunctional/parallel/ServiceCmd/DeployApp 600.78
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.91
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.91
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.16
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.3
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.21
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.35
149 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
150 TestFunctional/parallel/ServiceCmd/Format 0.39
151 TestFunctional/parallel/ServiceCmd/URL 0.38
191 TestJSONOutput/pause/Command 2.53
197 TestJSONOutput/unpause/Command 1.76
250 TestScheduledStopUnix 37.78
292 TestPause/serial/Pause 8.45
296 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.45
303 TestStartStop/group/old-k8s-version/serial/Pause 6.66
309 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 3.33
314 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.89
321 TestStartStop/group/no-preload/serial/Pause 6.64
327 TestStartStop/group/embed-certs/serial/Pause 7
331 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 3.24
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.81
343 TestStartStop/group/newest-cni/serial/Pause 7.61
348 TestStartStop/group/default-k8s-diff-port/serial/Pause 7.89
TestAddons/serial/Volcano (0.31s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-377223 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377223 addons disable volcano --alsologtostderr -v=1: exit status 11 (308.582599ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 08:32:35.076702 2322765 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:32:35.078198 2322765 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:32:35.078222 2322765 out.go:374] Setting ErrFile to fd 2...
	I1101 08:32:35.078229 2322765 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:32:35.078514 2322765 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 08:32:35.078832 2322765 mustload.go:66] Loading cluster: addons-377223
	I1101 08:32:35.079237 2322765 config.go:182] Loaded profile config "addons-377223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:32:35.079276 2322765 addons.go:607] checking whether the cluster is paused
	I1101 08:32:35.079400 2322765 config.go:182] Loaded profile config "addons-377223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:32:35.079425 2322765 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:32:35.079894 2322765 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:32:35.099957 2322765 ssh_runner.go:195] Run: systemctl --version
	I1101 08:32:35.100029 2322765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:32:35.118405 2322765 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:32:35.222612 2322765 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:32:35.222711 2322765 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:32:35.256129 2322765 cri.go:89] found id: "414b5bc39c329ea5379cc50b2f0931075b8101b78dc870b2b9a824bebf99ba8b"
	I1101 08:32:35.256150 2322765 cri.go:89] found id: "061ec86ab4df32357843abeec4767f4f1461ddcead4c7cf9d1492c198dcb3d3b"
	I1101 08:32:35.256154 2322765 cri.go:89] found id: "1e44c8f5f77ecb7acff0164f0a819e1059e60366883e9ce5725f335e263d6a55"
	I1101 08:32:35.256160 2322765 cri.go:89] found id: "6308511f21c7827239fbd03746b7074e42f38be4e4d6351dca4c35f1097133ef"
	I1101 08:32:35.256168 2322765 cri.go:89] found id: "f27ab360ec07837ab4e111f876b9abd1e2f28c700a55782e54fb5162221ed2b4"
	I1101 08:32:35.256172 2322765 cri.go:89] found id: "c1d7577e892adbb3f436f19e3b28d82a49f1cbfed6b8836c1ed6f86c65f16401"
	I1101 08:32:35.256175 2322765 cri.go:89] found id: "a3686c57573f9a7ed9871c19d746a5719c1d304d85f02afc10c29a8034b950eb"
	I1101 08:32:35.256178 2322765 cri.go:89] found id: "0603dc6c6335f97df7e85d9a14e859a49db2974a48e29156dd5264d896b4de45"
	I1101 08:32:35.256181 2322765 cri.go:89] found id: "d0048c30bd26213dfb453fa2bbd938c97e55fab6b53fc18bf545cdf3d996629d"
	I1101 08:32:35.256187 2322765 cri.go:89] found id: "0881184118c48ea6a57033511f480150827ad00b72255518f4d483725cab9f6c"
	I1101 08:32:35.256190 2322765 cri.go:89] found id: "4f21a033f7625d849deaefcdab250333db4bcf976055c2054e5820079f2d598e"
	I1101 08:32:35.256193 2322765 cri.go:89] found id: "5d0f635d3192a9e4f37b1f74942ca9a6d8846c5343e838584565abab0973a4b6"
	I1101 08:32:35.256196 2322765 cri.go:89] found id: "058fd3f4c2519a11447a33c3880fa2b1da6db273202e78739d3bb6bc56aafea3"
	I1101 08:32:35.256199 2322765 cri.go:89] found id: "f4379003f8bbbe0705cf7426f24a33ec6aaeb1b1f4fbd166749ec7eb68e28872"
	I1101 08:32:35.256202 2322765 cri.go:89] found id: "8208bb01eece1ad45ab18a4c4a3a0d21d53697dbf385e141bee5bd9ba3f5de1c"
	I1101 08:32:35.256206 2322765 cri.go:89] found id: "3c3aa06bb4ba09d56fe9add836fcacd57122f3975b1924a516b3f65b7dd51481"
	I1101 08:32:35.256209 2322765 cri.go:89] found id: "b7a004a1dd4c8a3998b83517cac0d350eff63e109d1288d34cf9bd98bd0dab69"
	I1101 08:32:35.256213 2322765 cri.go:89] found id: "07263ae55437dd8f877371c44f48f64a9062ae7d3979897f96b212a18ebf56d0"
	I1101 08:32:35.256216 2322765 cri.go:89] found id: "5931a7ff4389c4f1514bfe1a6d1b0c5c1f689a7388238437090ed28390f210ea"
	I1101 08:32:35.256219 2322765 cri.go:89] found id: "fae02c07e9b59780efff42cf36c0cce0b725f4a0d809231656f5017f195aebe7"
	I1101 08:32:35.256224 2322765 cri.go:89] found id: "8a52242ff83bb2c360c37d00a820f361e325851ade8acc4cc79d3753a40747c2"
	I1101 08:32:35.256227 2322765 cri.go:89] found id: "2567a3a7bafb70b92331208292b9e993dda24d204dd0e1335895f63c557be7b0"
	I1101 08:32:35.256230 2322765 cri.go:89] found id: "8b0193372487bea326225079bf14bbd934e98d53cba7eaf50fc1bc3f324dcf89"
	I1101 08:32:35.256232 2322765 cri.go:89] found id: ""
	I1101 08:32:35.256282 2322765 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:32:35.271421 2322765 out.go:203] 
	W1101 08:32:35.275036 2322765 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:32:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:32:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:32:35.275064 2322765 out.go:285] * 
	* 
	W1101 08:32:35.286844 2322765 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:32:35.290553 2322765 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-377223 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.31s)
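Every MK_ADDON_DISABLE_PAUSED failure in this run carries the same signature: the `addons disable` pause check lists kube-system containers with crictl (which succeeds) and then runs `sudo runc list -f json`, which exits 1 with "open /run/runc: no such file or directory". The commands below are a minimal reproduction sketch using the binary and profile name from this run; the final `ls` is a hypothetical follow-up (it assumes the node may keep OCI runtime state somewhere other than /run/runc, e.g. /run/crun) and is not something the test itself executes.

    # Pause check as performed by `addons disable` (commands copied from the stderr log above)
    out/minikube-linux-arm64 -p addons-377223 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
    out/minikube-linux-arm64 -p addons-377223 ssh "sudo runc list -f json"
    # Hypothetical follow-up: check which OCI runtime state directories actually exist on the node
    out/minikube-linux-arm64 -p addons-377223 ssh "ls -d /run/runc /run/crun"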

                                                
                                    
TestAddons/parallel/Registry (14.84s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.718372ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-hgg7l" [09f8c054-e829-4a8f-99ae-15f1199f9ce2] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003124588s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-ntzvs" [31f9ce22-49ba-49b5-8f43-927666ffacc6] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003167236s
addons_test.go:392: (dbg) Run:  kubectl --context addons-377223 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-377223 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-377223 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.127286749s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-377223 ip
2025/11/01 08:33:00 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-377223 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377223 addons disable registry --alsologtostderr -v=1: exit status 11 (408.267371ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 08:33:00.240485 2323282 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:33:00.247791 2323282 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:33:00.247931 2323282 out.go:374] Setting ErrFile to fd 2...
	I1101 08:33:00.247959 2323282 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:33:00.248388 2323282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 08:33:00.248861 2323282 mustload.go:66] Loading cluster: addons-377223
	I1101 08:33:00.249319 2323282 config.go:182] Loaded profile config "addons-377223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:33:00.249378 2323282 addons.go:607] checking whether the cluster is paused
	I1101 08:33:00.249539 2323282 config.go:182] Loaded profile config "addons-377223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:33:00.249578 2323282 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:33:00.250156 2323282 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:33:00.281266 2323282 ssh_runner.go:195] Run: systemctl --version
	I1101 08:33:00.281336 2323282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:33:00.317700 2323282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:33:00.451330 2323282 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:33:00.451426 2323282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:33:00.486433 2323282 cri.go:89] found id: "414b5bc39c329ea5379cc50b2f0931075b8101b78dc870b2b9a824bebf99ba8b"
	I1101 08:33:00.486509 2323282 cri.go:89] found id: "061ec86ab4df32357843abeec4767f4f1461ddcead4c7cf9d1492c198dcb3d3b"
	I1101 08:33:00.486529 2323282 cri.go:89] found id: "1e44c8f5f77ecb7acff0164f0a819e1059e60366883e9ce5725f335e263d6a55"
	I1101 08:33:00.486540 2323282 cri.go:89] found id: "6308511f21c7827239fbd03746b7074e42f38be4e4d6351dca4c35f1097133ef"
	I1101 08:33:00.486544 2323282 cri.go:89] found id: "f27ab360ec07837ab4e111f876b9abd1e2f28c700a55782e54fb5162221ed2b4"
	I1101 08:33:00.486547 2323282 cri.go:89] found id: "c1d7577e892adbb3f436f19e3b28d82a49f1cbfed6b8836c1ed6f86c65f16401"
	I1101 08:33:00.486550 2323282 cri.go:89] found id: "a3686c57573f9a7ed9871c19d746a5719c1d304d85f02afc10c29a8034b950eb"
	I1101 08:33:00.486553 2323282 cri.go:89] found id: "0603dc6c6335f97df7e85d9a14e859a49db2974a48e29156dd5264d896b4de45"
	I1101 08:33:00.486556 2323282 cri.go:89] found id: "d0048c30bd26213dfb453fa2bbd938c97e55fab6b53fc18bf545cdf3d996629d"
	I1101 08:33:00.486563 2323282 cri.go:89] found id: "0881184118c48ea6a57033511f480150827ad00b72255518f4d483725cab9f6c"
	I1101 08:33:00.486567 2323282 cri.go:89] found id: "4f21a033f7625d849deaefcdab250333db4bcf976055c2054e5820079f2d598e"
	I1101 08:33:00.486582 2323282 cri.go:89] found id: "5d0f635d3192a9e4f37b1f74942ca9a6d8846c5343e838584565abab0973a4b6"
	I1101 08:33:00.486592 2323282 cri.go:89] found id: "058fd3f4c2519a11447a33c3880fa2b1da6db273202e78739d3bb6bc56aafea3"
	I1101 08:33:00.486595 2323282 cri.go:89] found id: "f4379003f8bbbe0705cf7426f24a33ec6aaeb1b1f4fbd166749ec7eb68e28872"
	I1101 08:33:00.486598 2323282 cri.go:89] found id: "8208bb01eece1ad45ab18a4c4a3a0d21d53697dbf385e141bee5bd9ba3f5de1c"
	I1101 08:33:00.486607 2323282 cri.go:89] found id: "3c3aa06bb4ba09d56fe9add836fcacd57122f3975b1924a516b3f65b7dd51481"
	I1101 08:33:00.486625 2323282 cri.go:89] found id: "b7a004a1dd4c8a3998b83517cac0d350eff63e109d1288d34cf9bd98bd0dab69"
	I1101 08:33:00.486635 2323282 cri.go:89] found id: "07263ae55437dd8f877371c44f48f64a9062ae7d3979897f96b212a18ebf56d0"
	I1101 08:33:00.486639 2323282 cri.go:89] found id: "5931a7ff4389c4f1514bfe1a6d1b0c5c1f689a7388238437090ed28390f210ea"
	I1101 08:33:00.486642 2323282 cri.go:89] found id: "fae02c07e9b59780efff42cf36c0cce0b725f4a0d809231656f5017f195aebe7"
	I1101 08:33:00.486649 2323282 cri.go:89] found id: "8a52242ff83bb2c360c37d00a820f361e325851ade8acc4cc79d3753a40747c2"
	I1101 08:33:00.486659 2323282 cri.go:89] found id: "2567a3a7bafb70b92331208292b9e993dda24d204dd0e1335895f63c557be7b0"
	I1101 08:33:00.486662 2323282 cri.go:89] found id: "8b0193372487bea326225079bf14bbd934e98d53cba7eaf50fc1bc3f324dcf89"
	I1101 08:33:00.486667 2323282 cri.go:89] found id: ""
	I1101 08:33:00.486737 2323282 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:33:00.505000 2323282 out.go:203] 
	W1101 08:33:00.508163 2323282 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:33:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:33:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:33:00.508199 2323282 out.go:285] * 
	* 
	W1101 08:33:00.521028 2323282 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:33:00.523967 2323282 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-377223 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (14.84s)
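The registry checks themselves passed in this run: both pods went healthy, the busybox wget --spider probe against registry.kube-system.svc.cluster.local succeeded, and the debug GET against http://192.168.49.2:5000 went through; only the trailing addons disable step hit the shared runc pause-check failure described under TestAddons/serial/Volcano. As a sketch, the endpoint probe can be repeated from the host like this (the /v2/ path is the standard Docker registry API root and is an assumption; the test only probes the service root):

    # Re-run the registry endpoint probe from the host (node IP and port 5000 taken from
    # the GET debug line above; /v2/ is an assumed, standard registry API path)
    REGISTRY_IP=$(out/minikube-linux-arm64 -p addons-377223 ip)
    curl -sI "http://${REGISTRY_IP}:5000/v2/"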

                                                
                                    
TestAddons/parallel/RegistryCreds (0.49s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.708838ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-377223
addons_test.go:332: (dbg) Run:  kubectl --context addons-377223 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-377223 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377223 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (271.160116ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 08:33:42.588955 2325237 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:33:42.590355 2325237 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:33:42.590395 2325237 out.go:374] Setting ErrFile to fd 2...
	I1101 08:33:42.590415 2325237 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:33:42.590690 2325237 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 08:33:42.591001 2325237 mustload.go:66] Loading cluster: addons-377223
	I1101 08:33:42.591405 2325237 config.go:182] Loaded profile config "addons-377223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:33:42.591457 2325237 addons.go:607] checking whether the cluster is paused
	I1101 08:33:42.591586 2325237 config.go:182] Loaded profile config "addons-377223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:33:42.591630 2325237 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:33:42.592173 2325237 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:33:42.611557 2325237 ssh_runner.go:195] Run: systemctl --version
	I1101 08:33:42.611632 2325237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:33:42.629567 2325237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:33:42.738284 2325237 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:33:42.738364 2325237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:33:42.767436 2325237 cri.go:89] found id: "414b5bc39c329ea5379cc50b2f0931075b8101b78dc870b2b9a824bebf99ba8b"
	I1101 08:33:42.767457 2325237 cri.go:89] found id: "061ec86ab4df32357843abeec4767f4f1461ddcead4c7cf9d1492c198dcb3d3b"
	I1101 08:33:42.767462 2325237 cri.go:89] found id: "1e44c8f5f77ecb7acff0164f0a819e1059e60366883e9ce5725f335e263d6a55"
	I1101 08:33:42.767466 2325237 cri.go:89] found id: "6308511f21c7827239fbd03746b7074e42f38be4e4d6351dca4c35f1097133ef"
	I1101 08:33:42.767469 2325237 cri.go:89] found id: "f27ab360ec07837ab4e111f876b9abd1e2f28c700a55782e54fb5162221ed2b4"
	I1101 08:33:42.767473 2325237 cri.go:89] found id: "c1d7577e892adbb3f436f19e3b28d82a49f1cbfed6b8836c1ed6f86c65f16401"
	I1101 08:33:42.767477 2325237 cri.go:89] found id: "a3686c57573f9a7ed9871c19d746a5719c1d304d85f02afc10c29a8034b950eb"
	I1101 08:33:42.767480 2325237 cri.go:89] found id: "0603dc6c6335f97df7e85d9a14e859a49db2974a48e29156dd5264d896b4de45"
	I1101 08:33:42.767483 2325237 cri.go:89] found id: "d0048c30bd26213dfb453fa2bbd938c97e55fab6b53fc18bf545cdf3d996629d"
	I1101 08:33:42.767489 2325237 cri.go:89] found id: "0881184118c48ea6a57033511f480150827ad00b72255518f4d483725cab9f6c"
	I1101 08:33:42.767493 2325237 cri.go:89] found id: "4f21a033f7625d849deaefcdab250333db4bcf976055c2054e5820079f2d598e"
	I1101 08:33:42.767495 2325237 cri.go:89] found id: "5d0f635d3192a9e4f37b1f74942ca9a6d8846c5343e838584565abab0973a4b6"
	I1101 08:33:42.767503 2325237 cri.go:89] found id: "058fd3f4c2519a11447a33c3880fa2b1da6db273202e78739d3bb6bc56aafea3"
	I1101 08:33:42.767506 2325237 cri.go:89] found id: "f4379003f8bbbe0705cf7426f24a33ec6aaeb1b1f4fbd166749ec7eb68e28872"
	I1101 08:33:42.767510 2325237 cri.go:89] found id: "8208bb01eece1ad45ab18a4c4a3a0d21d53697dbf385e141bee5bd9ba3f5de1c"
	I1101 08:33:42.767516 2325237 cri.go:89] found id: "3c3aa06bb4ba09d56fe9add836fcacd57122f3975b1924a516b3f65b7dd51481"
	I1101 08:33:42.767519 2325237 cri.go:89] found id: "b7a004a1dd4c8a3998b83517cac0d350eff63e109d1288d34cf9bd98bd0dab69"
	I1101 08:33:42.767522 2325237 cri.go:89] found id: "07263ae55437dd8f877371c44f48f64a9062ae7d3979897f96b212a18ebf56d0"
	I1101 08:33:42.767525 2325237 cri.go:89] found id: "5931a7ff4389c4f1514bfe1a6d1b0c5c1f689a7388238437090ed28390f210ea"
	I1101 08:33:42.767528 2325237 cri.go:89] found id: "fae02c07e9b59780efff42cf36c0cce0b725f4a0d809231656f5017f195aebe7"
	I1101 08:33:42.767532 2325237 cri.go:89] found id: "8a52242ff83bb2c360c37d00a820f361e325851ade8acc4cc79d3753a40747c2"
	I1101 08:33:42.767535 2325237 cri.go:89] found id: "2567a3a7bafb70b92331208292b9e993dda24d204dd0e1335895f63c557be7b0"
	I1101 08:33:42.767538 2325237 cri.go:89] found id: "8b0193372487bea326225079bf14bbd934e98d53cba7eaf50fc1bc3f324dcf89"
	I1101 08:33:42.767541 2325237 cri.go:89] found id: ""
	I1101 08:33:42.767592 2325237 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:33:42.782176 2325237 out.go:203] 
	W1101 08:33:42.785073 2325237 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:33:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:33:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:33:42.785097 2325237 out.go:285] * 
	* 
	W1101 08:33:42.796541 2325237 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:33:42.799552 2325237 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-377223 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.49s)

                                                
                                    
TestAddons/parallel/Ingress (145.27s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-377223 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-377223 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-377223 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [f88d667d-b569-40dc-a66c-9942516357a0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [f88d667d-b569-40dc-a66c-9942516357a0] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.002913746s
I1101 08:33:31.966354 2315982 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-377223 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377223 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.451610031s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-377223 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-377223 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
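For context, exit status 28 is curl's operation-timed-out code (an interpretation; the log only records the ssh process exit status), so the in-node request to 127.0.0.1 with the nginx.example.com Host header never got an answer before the test gave up. A debugging sketch, assuming the controller lives in the ingress-nginx namespace (as the readiness wait at the top of this test implies) and that the nginx pod, service, and ingress come from the testdata manifests applied above:

    # Inspect the ingress path that the failed curl exercised
    kubectl --context addons-377223 -n ingress-nginx get pods -o wide
    kubectl --context addons-377223 get ingress,svc,pod -n default
    # Retry the probe from inside the node with a short explicit timeout and verbose output
    out/minikube-linux-arm64 -p addons-377223 ssh "curl -v -m 10 -H 'Host: nginx.example.com' http://127.0.0.1/"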
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-377223
helpers_test.go:243: (dbg) docker inspect addons-377223:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6884fdaa9d12b8ac05ab8c27110a73f94e382dc819395576a961daa9562f8a7c",
	        "Created": "2025-11-01T08:30:01.345784179Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2317129,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T08:30:01.425079886Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/6884fdaa9d12b8ac05ab8c27110a73f94e382dc819395576a961daa9562f8a7c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6884fdaa9d12b8ac05ab8c27110a73f94e382dc819395576a961daa9562f8a7c/hostname",
	        "HostsPath": "/var/lib/docker/containers/6884fdaa9d12b8ac05ab8c27110a73f94e382dc819395576a961daa9562f8a7c/hosts",
	        "LogPath": "/var/lib/docker/containers/6884fdaa9d12b8ac05ab8c27110a73f94e382dc819395576a961daa9562f8a7c/6884fdaa9d12b8ac05ab8c27110a73f94e382dc819395576a961daa9562f8a7c-json.log",
	        "Name": "/addons-377223",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-377223:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-377223",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6884fdaa9d12b8ac05ab8c27110a73f94e382dc819395576a961daa9562f8a7c",
	                "LowerDir": "/var/lib/docker/overlay2/d2e642e433ff80c15a157f6ff17b27c31b901009c25caa735bd2b0753db4c7bb-init/diff:/var/lib/docker/overlay2/e248e2c4c8c52e2b41c7098e27a1e6d3433c7b0d01c47093073da500268c4b77/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d2e642e433ff80c15a157f6ff17b27c31b901009c25caa735bd2b0753db4c7bb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d2e642e433ff80c15a157f6ff17b27c31b901009c25caa735bd2b0753db4c7bb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d2e642e433ff80c15a157f6ff17b27c31b901009c25caa735bd2b0753db4c7bb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-377223",
	                "Source": "/var/lib/docker/volumes/addons-377223/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-377223",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-377223",
	                "name.minikube.sigs.k8s.io": "addons-377223",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d458471456f387032f6e83ec4e978b2230ee0d641d45ecd31b07e88643dee31e",
	            "SandboxKey": "/var/run/docker/netns/d458471456f3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36055"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36056"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36059"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36057"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36058"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-377223": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:94:0b:1f:b5:f2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "936d16469801a3951dccf33a5a4c1dd7e8742e643175eea2b5578e8fdc28e87b",
	                    "EndpointID": "b4945a2467466221b3ab51efdaf28cf4eb7a0f66dfc7c73a7bcf086a9645db0c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-377223",
	                        "6884fdaa9d12"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
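The inspect dump above records the published host ports for the profile container (22/tcp on 127.0.0.1:36055, 5000/tcp on 36057, 8443/tcp on 36058, and so on). A single mapping can be pulled back out with the same Go template style as the cli_runner lines in the earlier stderr logs; a minimal sketch, assuming the container name addons-377223 from this run:

    # Extract the published SSH host port (36055 for this run, per the dump above)
    docker container inspect addons-377223 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'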
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-377223 -n addons-377223
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-377223 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-377223 logs -n 25: (1.601414066s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-849797                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-849797 │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:29 UTC │
	│ start   │ --download-only -p binary-mirror-203275 --alsologtostderr --binary-mirror http://127.0.0.1:39087 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-203275   │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │                     │
	│ delete  │ -p binary-mirror-203275                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-203275   │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:29 UTC │
	│ addons  │ enable dashboard -p addons-377223                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-377223          │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │                     │
	│ addons  │ disable dashboard -p addons-377223                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-377223          │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │                     │
	│ start   │ -p addons-377223 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-377223          │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:32 UTC │
	│ addons  │ addons-377223 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-377223          │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │                     │
	│ addons  │ addons-377223 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-377223          │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │                     │
	│ addons  │ addons-377223 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-377223          │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │                     │
	│ addons  │ addons-377223 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-377223          │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │                     │
	│ ip      │ addons-377223 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-377223          │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │ 01 Nov 25 08:33 UTC │
	│ addons  │ addons-377223 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-377223          │ jenkins │ v1.37.0 │ 01 Nov 25 08:33 UTC │                     │
	│ ssh     │ addons-377223 ssh cat /opt/local-path-provisioner/pvc-b9d8d8a4-42f3-4d56-9455-13fa291567c9_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-377223          │ jenkins │ v1.37.0 │ 01 Nov 25 08:33 UTC │ 01 Nov 25 08:33 UTC │
	│ addons  │ addons-377223 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-377223          │ jenkins │ v1.37.0 │ 01 Nov 25 08:33 UTC │                     │
	│ addons  │ addons-377223 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-377223          │ jenkins │ v1.37.0 │ 01 Nov 25 08:33 UTC │                     │
	│ addons  │ enable headlamp -p addons-377223 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-377223          │ jenkins │ v1.37.0 │ 01 Nov 25 08:33 UTC │                     │
	│ addons  │ addons-377223 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-377223          │ jenkins │ v1.37.0 │ 01 Nov 25 08:33 UTC │                     │
	│ addons  │ addons-377223 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-377223          │ jenkins │ v1.37.0 │ 01 Nov 25 08:33 UTC │                     │
	│ addons  │ addons-377223 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-377223          │ jenkins │ v1.37.0 │ 01 Nov 25 08:33 UTC │                     │
	│ ssh     │ addons-377223 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-377223          │ jenkins │ v1.37.0 │ 01 Nov 25 08:33 UTC │                     │
	│ addons  │ addons-377223 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-377223          │ jenkins │ v1.37.0 │ 01 Nov 25 08:33 UTC │                     │
	│ addons  │ addons-377223 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-377223          │ jenkins │ v1.37.0 │ 01 Nov 25 08:33 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-377223                                                                                                                                                                                                                                                                                                                                                                                           │ addons-377223          │ jenkins │ v1.37.0 │ 01 Nov 25 08:33 UTC │ 01 Nov 25 08:33 UTC │
	│ addons  │ addons-377223 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-377223          │ jenkins │ v1.37.0 │ 01 Nov 25 08:33 UTC │                     │
	│ ip      │ addons-377223 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-377223          │ jenkins │ v1.37.0 │ 01 Nov 25 08:35 UTC │ 01 Nov 25 08:35 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 08:29:36
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 08:29:36.109928 2316740 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:29:36.110062 2316740 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:29:36.110072 2316740 out.go:374] Setting ErrFile to fd 2...
	I1101 08:29:36.110077 2316740 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:29:36.110322 2316740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 08:29:36.110768 2316740 out.go:368] Setting JSON to false
	I1101 08:29:36.111624 2316740 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":61922,"bootTime":1761923854,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 08:29:36.111695 2316740 start.go:143] virtualization:  
	I1101 08:29:36.115791 2316740 out.go:179] * [addons-377223] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 08:29:36.118478 2316740 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 08:29:36.118563 2316740 notify.go:221] Checking for updates...
	I1101 08:29:36.123863 2316740 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 08:29:36.126257 2316740 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 08:29:36.128742 2316740 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2314135/.minikube
	I1101 08:29:36.131841 2316740 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 08:29:36.134391 2316740 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 08:29:36.137221 2316740 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 08:29:36.162701 2316740 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 08:29:36.162854 2316740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:29:36.221359 2316740 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-01 08:29:36.212539631 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 08:29:36.221462 2316740 docker.go:319] overlay module found
	I1101 08:29:36.224272 2316740 out.go:179] * Using the docker driver based on user configuration
	I1101 08:29:36.226741 2316740 start.go:309] selected driver: docker
	I1101 08:29:36.226758 2316740 start.go:930] validating driver "docker" against <nil>
	I1101 08:29:36.226771 2316740 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 08:29:36.227508 2316740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:29:36.295772 2316740 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-01 08:29:36.286686201 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 08:29:36.295977 2316740 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 08:29:36.296225 2316740 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 08:29:36.299006 2316740 out.go:179] * Using Docker driver with root privileges
	I1101 08:29:36.301728 2316740 cni.go:84] Creating CNI manager for ""
	I1101 08:29:36.301792 2316740 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 08:29:36.301804 2316740 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 08:29:36.301880 2316740 start.go:353] cluster config:
	{Name:addons-377223 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-377223 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1101 08:29:36.304874 2316740 out.go:179] * Starting "addons-377223" primary control-plane node in "addons-377223" cluster
	I1101 08:29:36.307700 2316740 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 08:29:36.311183 2316740 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 08:29:36.313741 2316740 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 08:29:36.313801 2316740 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 08:29:36.313815 2316740 cache.go:59] Caching tarball of preloaded images
	I1101 08:29:36.313923 2316740 preload.go:233] Found /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 08:29:36.313938 2316740 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 08:29:36.314283 2316740 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/config.json ...
	I1101 08:29:36.314311 2316740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/config.json: {Name:mk707a5761aa06a3feb48f1bb35d185f16273e51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:36.314478 2316740 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 08:29:36.329749 2316740 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 08:29:36.329895 2316740 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1101 08:29:36.329928 2316740 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1101 08:29:36.329937 2316740 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1101 08:29:36.329944 2316740 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1101 08:29:36.329949 2316740 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1101 08:29:53.891088 2316740 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1101 08:29:53.891131 2316740 cache.go:233] Successfully downloaded all kic artifacts
	I1101 08:29:53.891174 2316740 start.go:360] acquireMachinesLock for addons-377223: {Name:mk565622d540197422d5be45c5a825dc2f42c6dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 08:29:53.891293 2316740 start.go:364] duration metric: took 94.536µs to acquireMachinesLock for "addons-377223"
	I1101 08:29:53.891343 2316740 start.go:93] Provisioning new machine with config: &{Name:addons-377223 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-377223 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 08:29:53.891412 2316740 start.go:125] createHost starting for "" (driver="docker")
	I1101 08:29:53.894809 2316740 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1101 08:29:53.895048 2316740 start.go:159] libmachine.API.Create for "addons-377223" (driver="docker")
	I1101 08:29:53.895087 2316740 client.go:173] LocalClient.Create starting
	I1101 08:29:53.895211 2316740 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem
	I1101 08:29:54.139129 2316740 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem
	I1101 08:29:54.706440 2316740 cli_runner.go:164] Run: docker network inspect addons-377223 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 08:29:54.722711 2316740 cli_runner.go:211] docker network inspect addons-377223 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 08:29:54.722796 2316740 network_create.go:284] running [docker network inspect addons-377223] to gather additional debugging logs...
	I1101 08:29:54.722816 2316740 cli_runner.go:164] Run: docker network inspect addons-377223
	W1101 08:29:54.737683 2316740 cli_runner.go:211] docker network inspect addons-377223 returned with exit code 1
	I1101 08:29:54.737715 2316740 network_create.go:287] error running [docker network inspect addons-377223]: docker network inspect addons-377223: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-377223 not found
	I1101 08:29:54.737739 2316740 network_create.go:289] output of [docker network inspect addons-377223]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-377223 not found
	
	** /stderr **
	I1101 08:29:54.737840 2316740 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 08:29:54.753822 2316740 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b643c0}
	I1101 08:29:54.753868 2316740 network_create.go:124] attempt to create docker network addons-377223 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1101 08:29:54.753926 2316740 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-377223 addons-377223
	I1101 08:29:54.811609 2316740 network_create.go:108] docker network addons-377223 192.168.49.0/24 created
	I1101 08:29:54.811641 2316740 kic.go:121] calculated static IP "192.168.49.2" for the "addons-377223" container
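The log lines above record minikube picking the free 192.168.49.0/24 subnet, creating the dedicated bridge network for the profile, and reserving 192.168.49.2 as the node's static IP. As a small aside (plain Docker CLI, not part of the test run itself), the same network can be inspected by hand to confirm what was created:

        # subnet/gateway chosen for the profile network
        docker network inspect addons-377223 --format '{{json .IPAM.Config}}'
        # containers attached to it and their addresses (the node container should show 192.168.49.2)
        docker network inspect addons-377223 --format '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{println}}{{end}}'
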
	I1101 08:29:54.811730 2316740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 08:29:54.827385 2316740 cli_runner.go:164] Run: docker volume create addons-377223 --label name.minikube.sigs.k8s.io=addons-377223 --label created_by.minikube.sigs.k8s.io=true
	I1101 08:29:54.844623 2316740 oci.go:103] Successfully created a docker volume addons-377223
	I1101 08:29:54.844712 2316740 cli_runner.go:164] Run: docker run --rm --name addons-377223-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-377223 --entrypoint /usr/bin/test -v addons-377223:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 08:29:56.882611 2316740 cli_runner.go:217] Completed: docker run --rm --name addons-377223-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-377223 --entrypoint /usr/bin/test -v addons-377223:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (2.037859682s)
	I1101 08:29:56.882642 2316740 oci.go:107] Successfully prepared a docker volume addons-377223
	I1101 08:29:56.882681 2316740 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 08:29:56.882701 2316740 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 08:29:56.882758 2316740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-377223:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 08:30:01.247442 2316740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-377223:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.364647165s)
	I1101 08:30:01.247476 2316740 kic.go:203] duration metric: took 4.364771429s to extract preloaded images to volume ...
	W1101 08:30:01.247637 2316740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 08:30:01.247743 2316740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 08:30:01.327324 2316740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-377223 --name addons-377223 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-377223 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-377223 --network addons-377223 --ip 192.168.49.2 --volume addons-377223:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 08:30:01.675552 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Running}}
	I1101 08:30:01.713691 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:01.736219 2316740 cli_runner.go:164] Run: docker exec addons-377223 stat /var/lib/dpkg/alternatives/iptables
	I1101 08:30:01.794391 2316740 oci.go:144] the created container "addons-377223" has a running status.
	I1101 08:30:01.794420 2316740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa...
	I1101 08:30:01.907364 2316740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 08:30:01.936719 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:01.976148 2316740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 08:30:01.976174 2316740 kic_runner.go:114] Args: [docker exec --privileged addons-377223 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 08:30:02.064436 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:02.097797 2316740 machine.go:94] provisionDockerMachine start ...
	I1101 08:30:02.097927 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:02.130778 2316740 main.go:143] libmachine: Using SSH client type: native
	I1101 08:30:02.131138 2316740 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36055 <nil> <nil>}
	I1101 08:30:02.131152 2316740 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 08:30:02.133185 2316740 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 08:30:05.283351 2316740 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-377223
	
	I1101 08:30:05.283384 2316740 ubuntu.go:182] provisioning hostname "addons-377223"
	I1101 08:30:05.283446 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:05.300293 2316740 main.go:143] libmachine: Using SSH client type: native
	I1101 08:30:05.300606 2316740 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36055 <nil> <nil>}
	I1101 08:30:05.300621 2316740 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-377223 && echo "addons-377223" | sudo tee /etc/hostname
	I1101 08:30:05.456481 2316740 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-377223
	
	I1101 08:30:05.456608 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:05.474428 2316740 main.go:143] libmachine: Using SSH client type: native
	I1101 08:30:05.474744 2316740 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36055 <nil> <nil>}
	I1101 08:30:05.474765 2316740 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-377223' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-377223/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-377223' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 08:30:05.619700 2316740 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 08:30:05.619726 2316740 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-2314135/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-2314135/.minikube}
	I1101 08:30:05.619747 2316740 ubuntu.go:190] setting up certificates
	I1101 08:30:05.619756 2316740 provision.go:84] configureAuth start
	I1101 08:30:05.619815 2316740 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-377223
	I1101 08:30:05.636457 2316740 provision.go:143] copyHostCerts
	I1101 08:30:05.636535 2316740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem (1675 bytes)
	I1101 08:30:05.636665 2316740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem (1082 bytes)
	I1101 08:30:05.636730 2316740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem (1123 bytes)
	I1101 08:30:05.636782 2316740 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem org=jenkins.addons-377223 san=[127.0.0.1 192.168.49.2 addons-377223 localhost minikube]
	I1101 08:30:06.119766 2316740 provision.go:177] copyRemoteCerts
	I1101 08:30:06.119834 2316740 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 08:30:06.119894 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:06.136805 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
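The sshutil line above shows the client minikube builds: the node container's sshd is reached on a host port forwarded by Docker (36055 in this run, assigned dynamically) with the freshly generated machine key and the docker user. Purely as an illustration of what that connection amounts to, the equivalent manual session would look roughly like this (the port will differ on any other run):

        ssh -i /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa \
            -p 36055 docker@127.0.0.1 hostname
        # or, without caring about the forwarded port at all:
        out/minikube-linux-arm64 -p addons-377223 ssh -- hostname
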
	I1101 08:30:06.238924 2316740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 08:30:06.255259 2316740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 08:30:06.271607 2316740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 08:30:06.287933 2316740 provision.go:87] duration metric: took 668.068135ms to configureAuth
	I1101 08:30:06.287959 2316740 ubuntu.go:206] setting minikube options for container-runtime
	I1101 08:30:06.288184 2316740 config.go:182] Loaded profile config "addons-377223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:30:06.288302 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:06.305804 2316740 main.go:143] libmachine: Using SSH client type: native
	I1101 08:30:06.306108 2316740 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36055 <nil> <nil>}
	I1101 08:30:06.306128 2316740 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 08:30:06.554710 2316740 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 08:30:06.554734 2316740 machine.go:97] duration metric: took 4.456914687s to provisionDockerMachine
	I1101 08:30:06.554742 2316740 client.go:176] duration metric: took 12.65964649s to LocalClient.Create
	I1101 08:30:06.554758 2316740 start.go:167] duration metric: took 12.659708199s to libmachine.API.Create "addons-377223"
	I1101 08:30:06.554765 2316740 start.go:293] postStartSetup for "addons-377223" (driver="docker")
	I1101 08:30:06.554775 2316740 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 08:30:06.554849 2316740 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 08:30:06.554896 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:06.573206 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:06.675389 2316740 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 08:30:06.678533 2316740 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 08:30:06.678557 2316740 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 08:30:06.678567 2316740 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/addons for local assets ...
	I1101 08:30:06.678631 2316740 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/files for local assets ...
	I1101 08:30:06.678654 2316740 start.go:296] duration metric: took 123.883271ms for postStartSetup
	I1101 08:30:06.678955 2316740 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-377223
	I1101 08:30:06.695040 2316740 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/config.json ...
	I1101 08:30:06.695307 2316740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 08:30:06.695345 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:06.711415 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:06.812378 2316740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 08:30:06.816651 2316740 start.go:128] duration metric: took 12.925225236s to createHost
	I1101 08:30:06.816671 2316740 start.go:83] releasing machines lock for "addons-377223", held for 12.925351748s
	I1101 08:30:06.816737 2316740 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-377223
	I1101 08:30:06.832976 2316740 ssh_runner.go:195] Run: cat /version.json
	I1101 08:30:06.833028 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:06.833104 2316740 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 08:30:06.833163 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:06.856390 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:06.863964 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:07.045805 2316740 ssh_runner.go:195] Run: systemctl --version
	I1101 08:30:07.051842 2316740 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 08:30:07.088918 2316740 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 08:30:07.092988 2316740 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 08:30:07.093061 2316740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 08:30:07.120631 2316740 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 08:30:07.120698 2316740 start.go:496] detecting cgroup driver to use...
	I1101 08:30:07.120743 2316740 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 08:30:07.120835 2316740 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 08:30:07.136999 2316740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 08:30:07.149289 2316740 docker.go:218] disabling cri-docker service (if available) ...
	I1101 08:30:07.149351 2316740 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 08:30:07.166366 2316740 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 08:30:07.184170 2316740 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 08:30:07.306023 2316740 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 08:30:07.420575 2316740 docker.go:234] disabling docker service ...
	I1101 08:30:07.420692 2316740 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 08:30:07.442915 2316740 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 08:30:07.455407 2316740 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 08:30:07.564072 2316740 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 08:30:07.684628 2316740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 08:30:07.696736 2316740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 08:30:07.709749 2316740 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 08:30:07.709828 2316740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:30:07.718138 2316740 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 08:30:07.718223 2316740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:30:07.726326 2316740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:30:07.734411 2316740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:30:07.742491 2316740 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 08:30:07.750243 2316740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:30:07.758370 2316740 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:30:07.770885 2316740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:30:07.779197 2316740 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 08:30:07.786567 2316740 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 08:30:07.793795 2316740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 08:30:07.896544 2316740 ssh_runner.go:195] Run: sudo systemctl restart crio
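The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place before CRI-O is restarted. Pieced together from those commands (a reconstruction for readability, not a dump of the actual file), the drop-in should end up carrying, among its existing contents, values equivalent to:

        pause_image = "registry.k8s.io/pause:3.10.1"
        cgroup_manager = "cgroupfs"
        conmon_cgroup = "pod"
        default_sysctls = [
          "net.ipv4.ip_unprivileged_port_start=0",
        ]

together with the /etc/crictl.yaml written just before it, which points crictl at the CRI-O socket.
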
	I1101 08:30:08.016532 2316740 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 08:30:08.016642 2316740 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 08:30:08.020591 2316740 start.go:564] Will wait 60s for crictl version
	I1101 08:30:08.020701 2316740 ssh_runner.go:195] Run: which crictl
	I1101 08:30:08.024572 2316740 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 08:30:08.048185 2316740 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 08:30:08.048339 2316740 ssh_runner.go:195] Run: crio --version
	I1101 08:30:08.075422 2316740 ssh_runner.go:195] Run: crio --version
	I1101 08:30:08.110012 2316740 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 08:30:08.112819 2316740 cli_runner.go:164] Run: docker network inspect addons-377223 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 08:30:08.128321 2316740 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 08:30:08.132028 2316740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 08:30:08.141985 2316740 kubeadm.go:884] updating cluster {Name:addons-377223 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-377223 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 08:30:08.142142 2316740 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 08:30:08.142200 2316740 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 08:30:08.172038 2316740 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 08:30:08.172064 2316740 crio.go:433] Images already preloaded, skipping extraction
	I1101 08:30:08.172127 2316740 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 08:30:08.197357 2316740 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 08:30:08.197382 2316740 cache_images.go:86] Images are preloaded, skipping loading
	I1101 08:30:08.197389 2316740 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1101 08:30:08.197507 2316740 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-377223 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-377223 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
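The unit fragment above is the kubelet drop-in that minikube generates and, a little further down, copies to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes). If one wanted to confirm the node actually runs with these flags, a check along these lines would do (a hypothetical verification step, not part of the test flow):

        out/minikube-linux-arm64 -p addons-377223 ssh -- "systemctl cat kubelet | grep -- --node-ip"
        out/minikube-linux-arm64 -p addons-377223 ssh -- "systemctl is-active kubelet"
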
	I1101 08:30:08.197600 2316740 ssh_runner.go:195] Run: crio config
	I1101 08:30:08.262465 2316740 cni.go:84] Creating CNI manager for ""
	I1101 08:30:08.262538 2316740 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 08:30:08.262573 2316740 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 08:30:08.262624 2316740 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-377223 NodeName:addons-377223 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 08:30:08.262769 2316740 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-377223"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
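The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are staged on the node as /var/tmp/minikube/kubeadm.yaml.new (2210 bytes, per the scp line further down) before bootstrapping. The exact kubeadm invocation is not part of this excerpt; purely as an illustrative sketch, bootstrapping from such a file would look something like:

        sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
          --config /var/tmp/minikube/kubeadm.yaml \
          --ignore-preflight-errors=all

where both the binary path (see the "Found k8s binaries" line below) and the --ignore-preflight-errors flag are assumptions about how the staged config gets consumed, not something shown in this log.
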
	
	I1101 08:30:08.262865 2316740 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 08:30:08.270486 2316740 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 08:30:08.270584 2316740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 08:30:08.278074 2316740 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1101 08:30:08.290498 2316740 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 08:30:08.303069 2316740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1101 08:30:08.315750 2316740 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1101 08:30:08.319175 2316740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 08:30:08.328992 2316740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 08:30:08.442844 2316740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 08:30:08.458462 2316740 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223 for IP: 192.168.49.2
	I1101 08:30:08.458481 2316740 certs.go:195] generating shared ca certs ...
	I1101 08:30:08.458497 2316740 certs.go:227] acquiring lock for ca certs: {Name:mk24842b93d4e231663829c7c8677798ff77a3a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:30:08.458618 2316740 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key
	I1101 08:30:09.004054 2316740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt ...
	I1101 08:30:09.004101 2316740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt: {Name:mkb30c251a0186d14ca3dc95f9f38db60acf13e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:30:09.004336 2316740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key ...
	I1101 08:30:09.004354 2316740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key: {Name:mk676e72c64736a65b6cd527cf9a075dbc322d08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:30:09.004439 2316740 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key
	I1101 08:30:09.317720 2316740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.crt ...
	I1101 08:30:09.317753 2316740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.crt: {Name:mk097382b33d757885fbe3314ac20d0d846a401f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:30:09.317959 2316740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key ...
	I1101 08:30:09.317973 2316740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key: {Name:mk487123a20a0843902554f556877d9e807297c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:30:09.318066 2316740 certs.go:257] generating profile certs ...
	I1101 08:30:09.318127 2316740 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.key
	I1101 08:30:09.318145 2316740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt with IP's: []
	I1101 08:30:10.095776 2316740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt ...
	I1101 08:30:10.095817 2316740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt: {Name:mk2e12a5ee979e835444f26baf6cea16dadadded Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:30:10.096039 2316740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.key ...
	I1101 08:30:10.096052 2316740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.key: {Name:mk32d7b806304f01fbf6fcad8c77561a2f7e70cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:30:10.096147 2316740 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/apiserver.key.7c033e1a
	I1101 08:30:10.096168 2316740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/apiserver.crt.7c033e1a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1101 08:30:10.181292 2316740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/apiserver.crt.7c033e1a ...
	I1101 08:30:10.181346 2316740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/apiserver.crt.7c033e1a: {Name:mk9864f04219f2e56a48a1df299509615ad1f08e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:30:10.181518 2316740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/apiserver.key.7c033e1a ...
	I1101 08:30:10.181532 2316740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/apiserver.key.7c033e1a: {Name:mka4a060e4e5958e4895fbd15cf4a7dc9b680a22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:30:10.181617 2316740 certs.go:382] copying /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/apiserver.crt.7c033e1a -> /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/apiserver.crt
	I1101 08:30:10.181695 2316740 certs.go:386] copying /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/apiserver.key.7c033e1a -> /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/apiserver.key
	I1101 08:30:10.181752 2316740 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/proxy-client.key
	I1101 08:30:10.181773 2316740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/proxy-client.crt with IP's: []
	I1101 08:30:10.721386 2316740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/proxy-client.crt ...
	I1101 08:30:10.721418 2316740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/proxy-client.crt: {Name:mk273d3f416e5e8e0db2b485fbe082b549ff7a24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:30:10.721594 2316740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/proxy-client.key ...
	I1101 08:30:10.721607 2316740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/proxy-client.key: {Name:mk7931d74d94975d33ebde71a1fe88fe631527fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:30:10.721787 2316740 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 08:30:10.721824 2316740 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem (1082 bytes)
	I1101 08:30:10.721851 2316740 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem (1123 bytes)
	I1101 08:30:10.721881 2316740 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem (1675 bytes)
	I1101 08:30:10.722414 2316740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 08:30:10.739546 2316740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 08:30:10.756932 2316740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 08:30:10.776841 2316740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 08:30:10.795266 2316740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 08:30:10.814465 2316740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 08:30:10.831014 2316740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 08:30:10.847300 2316740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 08:30:10.864557 2316740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 08:30:10.881633 2316740 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 08:30:10.894009 2316740 ssh_runner.go:195] Run: openssl version
	I1101 08:30:10.899973 2316740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 08:30:10.908293 2316740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 08:30:10.911732 2316740 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1101 08:30:10.911794 2316740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 08:30:10.952071 2316740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
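	The openssl/ln steps above install the minikube CA into the node's trust store using OpenSSL's subject-hash naming (b5213941 is the hash reported for this CA). A rough sketch of the same sequence, illustrative only:

	  # link the CA into /etc/ssl/certs and expose it under its OpenSSL subject hash
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo test -L "/etc/ssl/certs/${HASH}.0" || sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"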
	I1101 08:30:10.960012 2316740 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 08:30:10.963292 2316740 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 08:30:10.963378 2316740 kubeadm.go:401] StartCluster: {Name:addons-377223 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-377223 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 08:30:10.963467 2316740 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:30:10.963522 2316740 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:30:10.992650 2316740 cri.go:89] found id: ""
	I1101 08:30:10.992717 2316740 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 08:30:11.000343 2316740 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 08:30:11.009304 2316740 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 08:30:11.009382 2316740 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 08:30:11.017505 2316740 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 08:30:11.017525 2316740 kubeadm.go:158] found existing configuration files:
	
	I1101 08:30:11.017575 2316740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 08:30:11.025496 2316740 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 08:30:11.025560 2316740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 08:30:11.032631 2316740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 08:30:11.039948 2316740 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 08:30:11.040010 2316740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 08:30:11.047169 2316740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 08:30:11.054885 2316740 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 08:30:11.054951 2316740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 08:30:11.062574 2316740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 08:30:11.070347 2316740 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 08:30:11.070416 2316740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 08:30:11.077975 2316740 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 08:30:11.118602 2316740 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 08:30:11.118921 2316740 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 08:30:11.147936 2316740 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 08:30:11.148032 2316740 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 08:30:11.148073 2316740 kubeadm.go:319] OS: Linux
	I1101 08:30:11.148146 2316740 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 08:30:11.148210 2316740 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 08:30:11.148279 2316740 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 08:30:11.148353 2316740 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 08:30:11.148422 2316740 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 08:30:11.148489 2316740 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 08:30:11.148556 2316740 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 08:30:11.148621 2316740 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 08:30:11.148679 2316740 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 08:30:11.218173 2316740 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 08:30:11.218299 2316740 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 08:30:11.218445 2316740 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 08:30:11.225738 2316740 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 08:30:11.228640 2316740 out.go:252]   - Generating certificates and keys ...
	I1101 08:30:11.228801 2316740 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 08:30:11.228913 2316740 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 08:30:12.642829 2316740 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 08:30:13.075401 2316740 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 08:30:13.753990 2316740 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 08:30:14.509744 2316740 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 08:30:15.043006 2316740 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 08:30:15.043165 2316740 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-377223 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 08:30:15.546346 2316740 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 08:30:15.546501 2316740 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-377223 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 08:30:16.764193 2316740 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 08:30:17.020568 2316740 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 08:30:17.749115 2316740 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 08:30:17.749443 2316740 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 08:30:18.236842 2316740 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 08:30:18.928577 2316740 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 08:30:19.690810 2316740 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 08:30:19.900238 2316740 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 08:30:20.084810 2316740 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 08:30:20.085593 2316740 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 08:30:20.088398 2316740 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 08:30:20.092011 2316740 out.go:252]   - Booting up control plane ...
	I1101 08:30:20.092126 2316740 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 08:30:20.092208 2316740 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 08:30:20.092278 2316740 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 08:30:20.108811 2316740 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 08:30:20.109135 2316740 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 08:30:20.116886 2316740 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 08:30:20.117184 2316740 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 08:30:20.117486 2316740 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 08:30:20.257388 2316740 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 08:30:20.257549 2316740 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 08:30:21.258606 2316740 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001678313s
	I1101 08:30:21.262995 2316740 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 08:30:21.263123 2316740 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1101 08:30:21.263245 2316740 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 08:30:21.263384 2316740 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 08:30:24.160269 2316740 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.89718019s
	I1101 08:30:26.828449 2316740 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.565805378s
	I1101 08:30:27.264807 2316740 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.002062042s
	I1101 08:30:27.287387 2316740 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 08:30:27.300708 2316740 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 08:30:27.323512 2316740 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 08:30:27.323735 2316740 kubeadm.go:319] [mark-control-plane] Marking the node addons-377223 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 08:30:27.338362 2316740 kubeadm.go:319] [bootstrap-token] Using token: j41a3s.jdvrqm41b2wdvu6m
	I1101 08:30:27.341431 2316740 out.go:252]   - Configuring RBAC rules ...
	I1101 08:30:27.341554 2316740 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 08:30:27.349284 2316740 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 08:30:27.357187 2316740 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 08:30:27.361403 2316740 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 08:30:27.368716 2316740 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 08:30:27.372445 2316740 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 08:30:27.673508 2316740 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 08:30:28.130883 2316740 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 08:30:28.671347 2316740 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 08:30:28.672477 2316740 kubeadm.go:319] 
	I1101 08:30:28.672554 2316740 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 08:30:28.672560 2316740 kubeadm.go:319] 
	I1101 08:30:28.672641 2316740 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 08:30:28.672646 2316740 kubeadm.go:319] 
	I1101 08:30:28.672672 2316740 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 08:30:28.672734 2316740 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 08:30:28.672787 2316740 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 08:30:28.672791 2316740 kubeadm.go:319] 
	I1101 08:30:28.672848 2316740 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 08:30:28.672853 2316740 kubeadm.go:319] 
	I1101 08:30:28.672903 2316740 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 08:30:28.672908 2316740 kubeadm.go:319] 
	I1101 08:30:28.672962 2316740 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 08:30:28.673040 2316740 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 08:30:28.673111 2316740 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 08:30:28.673115 2316740 kubeadm.go:319] 
	I1101 08:30:28.673204 2316740 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 08:30:28.673285 2316740 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 08:30:28.673289 2316740 kubeadm.go:319] 
	I1101 08:30:28.673395 2316740 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token j41a3s.jdvrqm41b2wdvu6m \
	I1101 08:30:28.673504 2316740 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4543f3590cccb8495171c728a2631a18a238961aafa5b09f43cdaf25ae01fa5d \
	I1101 08:30:28.673526 2316740 kubeadm.go:319] 	--control-plane 
	I1101 08:30:28.673530 2316740 kubeadm.go:319] 
	I1101 08:30:28.673619 2316740 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 08:30:28.673623 2316740 kubeadm.go:319] 
	I1101 08:30:28.673709 2316740 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token j41a3s.jdvrqm41b2wdvu6m \
	I1101 08:30:28.673817 2316740 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4543f3590cccb8495171c728a2631a18a238961aafa5b09f43cdaf25ae01fa5d 
	I1101 08:30:28.675659 2316740 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 08:30:28.675916 2316740 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 08:30:28.676027 2316740 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 08:30:28.676059 2316740 cni.go:84] Creating CNI manager for ""
	I1101 08:30:28.676068 2316740 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 08:30:28.679180 2316740 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 08:30:28.682199 2316740 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 08:30:28.686104 2316740 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 08:30:28.686164 2316740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 08:30:28.698831 2316740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 08:30:28.982137 2316740 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 08:30:28.982230 2316740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:28.982279 2316740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-377223 minikube.k8s.io/updated_at=2025_11_01T08_30_28_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192 minikube.k8s.io/name=addons-377223 minikube.k8s.io/primary=true
	I1101 08:30:29.127702 2316740 ops.go:34] apiserver oom_adj: -16
	I1101 08:30:29.127830 2316740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:29.628284 2316740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:30.128002 2316740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:30.628236 2316740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:31.128448 2316740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:31.628092 2316740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:32.128948 2316740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:32.627940 2316740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:33.128002 2316740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:33.627999 2316740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:33.732935 2316740 kubeadm.go:1114] duration metric: took 4.750760568s to wait for elevateKubeSystemPrivileges
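	The repeated "kubectl get sa default" calls above are minikube waiting for the default ServiceAccount to appear after granting kube-system:default cluster-admin (the elevateKubeSystemPrivileges step). A rough sketch of that wait, with the binary and kubeconfig paths taken from this log:

	  # bind kube-system:default to cluster-admin, then poll until the default SA exists
	  KUBECTL=/var/lib/minikube/binaries/v1.34.1/kubectl
	  KCFG=/var/lib/minikube/kubeconfig
	  sudo "$KUBECTL" --kubeconfig="$KCFG" create clusterrolebinding minikube-rbac \
	    --clusterrole=cluster-admin --serviceaccount=kube-system:default
	  until sudo "$KUBECTL" --kubeconfig="$KCFG" get sa default >/dev/null 2>&1; do sleep 0.5; done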
	I1101 08:30:33.732970 2316740 kubeadm.go:403] duration metric: took 22.76959635s to StartCluster
	I1101 08:30:33.732987 2316740 settings.go:142] acquiring lock: {Name:mka73a3765cb6575d4abe38a6ae3325222684786 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:30:33.733096 2316740 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 08:30:33.733554 2316740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/kubeconfig: {Name:mk53329368b7306829f4e47471838b13e1e36d2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:30:33.733744 2316740 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 08:30:33.733872 2316740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 08:30:33.734105 2316740 config.go:182] Loaded profile config "addons-377223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:30:33.734133 2316740 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1101 08:30:33.734226 2316740 addons.go:70] Setting yakd=true in profile "addons-377223"
	I1101 08:30:33.734240 2316740 addons.go:239] Setting addon yakd=true in "addons-377223"
	I1101 08:30:33.734262 2316740 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:30:33.734819 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:33.735353 2316740 addons.go:70] Setting metrics-server=true in profile "addons-377223"
	I1101 08:30:33.735379 2316740 addons.go:239] Setting addon metrics-server=true in "addons-377223"
	I1101 08:30:33.735401 2316740 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:30:33.735792 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:33.735961 2316740 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-377223"
	I1101 08:30:33.735983 2316740 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-377223"
	I1101 08:30:33.736029 2316740 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:30:33.736447 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:33.737250 2316740 addons.go:70] Setting registry=true in profile "addons-377223"
	I1101 08:30:33.737305 2316740 addons.go:239] Setting addon registry=true in "addons-377223"
	I1101 08:30:33.737343 2316740 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:30:33.737853 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:33.738447 2316740 addons.go:70] Setting registry-creds=true in profile "addons-377223"
	I1101 08:30:33.738475 2316740 addons.go:239] Setting addon registry-creds=true in "addons-377223"
	I1101 08:30:33.738506 2316740 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:30:33.738901 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:33.739947 2316740 out.go:179] * Verifying Kubernetes components...
	I1101 08:30:33.747942 2316740 addons.go:70] Setting storage-provisioner=true in profile "addons-377223"
	I1101 08:30:33.748014 2316740 addons.go:239] Setting addon storage-provisioner=true in "addons-377223"
	I1101 08:30:33.748084 2316740 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:30:33.748516 2316740 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-377223"
	I1101 08:30:33.748545 2316740 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-377223"
	I1101 08:30:33.748769 2316740 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:30:33.751054 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:33.754237 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:33.766175 2316740 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-377223"
	I1101 08:30:33.766196 2316740 addons.go:70] Setting default-storageclass=true in profile "addons-377223"
	I1101 08:30:33.766209 2316740 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-377223"
	I1101 08:30:33.766223 2316740 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-377223"
	I1101 08:30:33.766625 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:33.766727 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:33.768301 2316740 addons.go:70] Setting gcp-auth=true in profile "addons-377223"
	I1101 08:30:33.768337 2316740 mustload.go:66] Loading cluster: addons-377223
	I1101 08:30:33.768555 2316740 config.go:182] Loaded profile config "addons-377223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:30:33.768795 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:33.784010 2316740 addons.go:70] Setting ingress=true in profile "addons-377223"
	I1101 08:30:33.784083 2316740 addons.go:239] Setting addon ingress=true in "addons-377223"
	I1101 08:30:33.784129 2316740 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:30:33.784777 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:33.766181 2316740 addons.go:70] Setting cloud-spanner=true in profile "addons-377223"
	I1101 08:30:33.791499 2316740 addons.go:239] Setting addon cloud-spanner=true in "addons-377223"
	I1101 08:30:33.791697 2316740 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:30:33.766190 2316740 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-377223"
	I1101 08:30:33.797565 2316740 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-377223"
	I1101 08:30:33.797626 2316740 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:30:33.798201 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:33.808295 2316740 addons.go:70] Setting ingress-dns=true in profile "addons-377223"
	I1101 08:30:33.808344 2316740 addons.go:239] Setting addon ingress-dns=true in "addons-377223"
	I1101 08:30:33.808387 2316740 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:30:33.808860 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:33.814898 2316740 addons.go:70] Setting volcano=true in profile "addons-377223"
	I1101 08:30:33.814989 2316740 addons.go:239] Setting addon volcano=true in "addons-377223"
	I1101 08:30:33.815072 2316740 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:30:33.815810 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:33.826738 2316740 addons.go:70] Setting inspektor-gadget=true in profile "addons-377223"
	I1101 08:30:33.826791 2316740 addons.go:239] Setting addon inspektor-gadget=true in "addons-377223"
	I1101 08:30:33.826826 2316740 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:30:33.827467 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:33.842040 2316740 addons.go:70] Setting volumesnapshots=true in profile "addons-377223"
	I1101 08:30:33.842108 2316740 addons.go:239] Setting addon volumesnapshots=true in "addons-377223"
	I1101 08:30:33.842155 2316740 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:30:33.842652 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:33.853046 2316740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 08:30:33.871964 2316740 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1101 08:30:33.875027 2316740 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 08:30:33.875094 2316740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1101 08:30:33.875176 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:33.912635 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:33.937128 2316740 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1101 08:30:33.940610 2316740 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 08:30:33.940672 2316740 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 08:30:33.940736 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:33.940901 2316740 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1101 08:30:33.945933 2316740 out.go:179]   - Using image docker.io/registry:3.0.0
	I1101 08:30:33.951679 2316740 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1101 08:30:33.951703 2316740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1101 08:30:33.951792 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:33.997699 2316740 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1101 08:30:33.997900 2316740 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 08:30:34.019894 2316740 addons.go:239] Setting addon default-storageclass=true in "addons-377223"
	I1101 08:30:34.020000 2316740 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:30:34.042883 2316740 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1101 08:30:34.044715 2316740 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1101 08:30:34.046000 2316740 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 08:30:34.046153 2316740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1101 08:30:34.046322 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:34.054282 2316740 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-377223"
	I1101 08:30:34.054324 2316740 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:30:34.054752 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:34.055191 2316740 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	W1101 08:30:34.055909 2316740 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1101 08:30:34.056640 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:34.065606 2316740 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 08:30:34.065627 2316740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1101 08:30:34.065697 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:34.077763 2316740 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 08:30:34.077788 2316740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1101 08:30:34.077862 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:34.087530 2316740 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 08:30:34.091293 2316740 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 08:30:34.091319 2316740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 08:30:34.091386 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:34.094258 2316740 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1101 08:30:34.094491 2316740 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1101 08:30:34.094505 2316740 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1101 08:30:34.094582 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:34.097994 2316740 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1101 08:30:34.099213 2316740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 08:30:34.102743 2316740 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	I1101 08:30:34.103082 2316740 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1101 08:30:34.102936 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:34.104810 2316740 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1101 08:30:34.137226 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:34.154749 2316740 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1101 08:30:34.155007 2316740 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1101 08:30:34.156266 2316740 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:30:34.170719 2316740 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1101 08:30:34.171616 2316740 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 08:30:34.171762 2316740 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1101 08:30:34.171777 2316740 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1101 08:30:34.171858 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:34.177719 2316740 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1101 08:30:34.179566 2316740 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 08:30:34.179593 2316740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1101 08:30:34.179663 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:34.197789 2316740 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1101 08:30:34.198154 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:34.198872 2316740 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1101 08:30:34.198887 2316740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1101 08:30:34.198940 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:34.208551 2316740 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1101 08:30:34.210914 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:34.212547 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:34.217087 2316740 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1101 08:30:34.221153 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:34.225098 2316740 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1101 08:30:34.225261 2316740 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1101 08:30:34.228579 2316740 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1101 08:30:34.232166 2316740 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1101 08:30:34.232192 2316740 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1101 08:30:34.232258 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:34.232477 2316740 out.go:179]   - Using image docker.io/busybox:stable
	I1101 08:30:34.236307 2316740 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 08:30:34.240418 2316740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1101 08:30:34.240545 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:34.312389 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:34.343911 2316740 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 08:30:34.343930 2316740 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 08:30:34.343991 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:34.354044 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:34.360139 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:34.363030 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:34.378878 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:34.393446 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:34.394948 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	W1101 08:30:34.408171 2316740 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1101 08:30:34.408266 2316740 retry.go:31] will retry after 222.533945ms: ssh: handshake failed: EOF
	I1101 08:30:34.411787 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:34.423273 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	W1101 08:30:34.424582 2316740 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1101 08:30:34.424603 2316740 retry.go:31] will retry after 192.804546ms: ssh: handshake failed: EOF
	I1101 08:30:34.431188 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:34.453083 2316740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 08:30:34.884643 2316740 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:34.884706 2316740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1101 08:30:34.934383 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 08:30:35.013658 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 08:30:35.032088 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 08:30:35.054602 2316740 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1101 08:30:35.054621 2316740 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1101 08:30:35.058160 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:35.094505 2316740 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1101 08:30:35.094580 2316740 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1101 08:30:35.109917 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 08:30:35.124477 2316740 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 08:30:35.124605 2316740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1101 08:30:35.209364 2316740 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1101 08:30:35.209441 2316740 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1101 08:30:35.249185 2316740 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1101 08:30:35.249270 2316740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1101 08:30:35.250001 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 08:30:35.269393 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1101 08:30:35.272211 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 08:30:35.301616 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 08:30:35.304893 2316740 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1101 08:30:35.304962 2316740 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1101 08:30:35.339749 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 08:30:35.342628 2316740 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 08:30:35.342696 2316740 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 08:30:35.413354 2316740 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1101 08:30:35.413426 2316740 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1101 08:30:35.430555 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1101 08:30:35.467997 2316740 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1101 08:30:35.468078 2316740 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1101 08:30:35.471113 2316740 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1101 08:30:35.471181 2316740 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1101 08:30:35.506612 2316740 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 08:30:35.506687 2316740 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 08:30:35.558851 2316740 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1101 08:30:35.558923 2316740 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1101 08:30:35.632166 2316740 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1101 08:30:35.632237 2316740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1101 08:30:35.648025 2316740 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1101 08:30:35.648106 2316740 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1101 08:30:35.674513 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 08:30:35.770806 2316740 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1101 08:30:35.770882 2316740 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1101 08:30:35.818247 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1101 08:30:35.855259 2316740 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1101 08:30:35.855331 2316740 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1101 08:30:35.887523 2316740 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.788278629s)
	I1101 08:30:35.887687 2316740 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1101 08:30:35.887612 2316740 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.434508358s)
	I1101 08:30:35.889241 2316740 node_ready.go:35] waiting up to 6m0s for node "addons-377223" to be "Ready" ...
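	Note: the CoreDNS step completed above rewrites the cluster's Corefile so that host.minikube.internal resolves to the host gateway (192.168.49.1). A sketch of what the coredns ConfigMap looks like after the sed edit in that command, assuming the stock minikube Corefile layout:

	apiVersion: v1
	kind: ConfigMap
	metadata:
	  name: coredns
	  namespace: kube-system
	data:
	  Corefile: |
	    .:53 {
	        log
	        errors
	        health
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	        kubernetes cluster.local in-addr.arpa ip6.arpa {
	           pods insecure
	           fallthrough in-addr.arpa ip6.arpa
	        }
	        forward . /etc/resolv.conf
	        cache 30
	        loop
	        reload
	    }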
	I1101 08:30:35.977095 2316740 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 08:30:35.977113 2316740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1101 08:30:36.037749 2316740 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1101 08:30:36.037824 2316740 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1101 08:30:36.354862 2316740 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1101 08:30:36.354935 2316740 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1101 08:30:36.396849 2316740 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-377223" context rescaled to 1 replicas
	I1101 08:30:36.439499 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 08:30:36.566613 2316740 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1101 08:30:36.566681 2316740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1101 08:30:36.722152 2316740 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1101 08:30:36.722225 2316740 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1101 08:30:36.888321 2316740 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1101 08:30:36.888392 2316740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1101 08:30:37.034383 2316740 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1101 08:30:37.034457 2316740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1101 08:30:37.230972 2316740 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 08:30:37.231033 2316740 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1101 08:30:37.453073 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1101 08:30:37.906539 2316740 node_ready.go:57] node "addons-377223" has "Ready":"False" status (will retry)
	I1101 08:30:38.706825 2316740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.674665709s)
	I1101 08:30:38.706923 2316740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.693239231s)
	I1101 08:30:38.869227 2316740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.81102223s)
	W1101 08:30:38.869311 2316740 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:38.869342 2316740 retry.go:31] will retry after 222.053822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
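	Note: the validation error above means the generated /etc/kubernetes/addons/ig-crd.yaml is missing the top-level apiVersion and kind fields that every Kubernetes manifest must declare, so kubectl's client-side validation rejects the file on every retry that follows. For reference, a minimal sketch of the header a CustomResourceDefinition manifest needs (the group and resource names here are illustrative, not taken from the actual file):

	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
	metadata:
	  name: traces.gadget.kinvolk.io   # hypothetical resource name
	spec:
	  group: gadget.kinvolk.io
	  names:
	    kind: Trace
	    plural: traces
	  scope: Namespaced
	  versions:
	    - name: v1alpha1
	      served: true
	      storage: true
	      schema:
	        openAPIV3Schema:
	          type: object
	          x-kubernetes-preserve-unknown-fields: true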
	I1101 08:30:38.869419 2316740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.759429204s)
	I1101 08:30:39.092424 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:39.779181 2316740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.509706858s)
	I1101 08:30:39.779236 2316740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.506970965s)
	I1101 08:30:39.779280 2316740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.477593247s)
	I1101 08:30:39.779451 2316740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.439635428s)
	I1101 08:30:39.779499 2316740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.348886327s)
	I1101 08:30:39.779511 2316740 addons.go:480] Verifying addon registry=true in "addons-377223"
	I1101 08:30:39.779724 2316740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.105140737s)
	I1101 08:30:39.779738 2316740 addons.go:480] Verifying addon metrics-server=true in "addons-377223"
	I1101 08:30:39.779773 2316740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.96146379s)
	I1101 08:30:39.780481 2316740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.53042183s)
	I1101 08:30:39.780503 2316740 addons.go:480] Verifying addon ingress=true in "addons-377223"
	I1101 08:30:39.783829 2316740 out.go:179] * Verifying ingress addon...
	I1101 08:30:39.783870 2316740 out.go:179] * Verifying registry addon...
	I1101 08:30:39.783944 2316740 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-377223 service yakd-dashboard -n yakd-dashboard
	
	I1101 08:30:39.788345 2316740 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1101 08:30:39.788413 2316740 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1101 08:30:39.805913 2316740 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1101 08:30:39.805933 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:39.813904 2316740 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 08:30:39.813928 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:39.889879 2316740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.450289844s)
	W1101 08:30:39.889917 2316740 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1101 08:30:39.889936 2316740 retry.go:31] will retry after 371.961333ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
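	Note: unlike the ig-crd.yaml failure, this one is an ordering problem rather than a malformed manifest: the VolumeSnapshotClass object in csi-hostpath-snapshotclass.yaml can only be created once the VolumeSnapshotClass CRD applied in the same batch has been registered by the API server, which is why the forced re-apply a moment later succeeds. A minimal sketch of the dependent object, using the name reported in the error and an assumed hostpath driver name:

	apiVersion: snapshot.storage.k8s.io/v1
	kind: VolumeSnapshotClass
	metadata:
	  name: csi-hostpath-snapclass       # name from the error above
	driver: hostpath.csi.k8s.io          # assumed driver name from the upstream hostpath example
	deletionPolicy: Delete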
	W1101 08:30:39.908815 2316740 node_ready.go:57] node "addons-377223" has "Ready":"False" status (will retry)
	I1101 08:30:40.262476 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 08:30:40.297080 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:40.297268 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:40.521352 2316740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.068189344s)
	I1101 08:30:40.521387 2316740 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-377223"
	I1101 08:30:40.525905 2316740 out.go:179] * Verifying csi-hostpath-driver addon...
	I1101 08:30:40.529786 2316740 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1101 08:30:40.539331 2316740 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 08:30:40.539355 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:40.622877 2316740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.530362405s)
	W1101 08:30:40.622921 2316740 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:40.622941 2316740 retry.go:31] will retry after 235.820561ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:40.794876 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:40.795117 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:40.859343 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:41.033347 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:41.293843 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:41.294923 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:41.533738 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:41.773479 2316740 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1101 08:30:41.773585 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:41.797770 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:41.798731 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:41.800281 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:41.912339 2316740 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1101 08:30:41.924677 2316740 addons.go:239] Setting addon gcp-auth=true in "addons-377223"
	I1101 08:30:41.924724 2316740 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:30:41.925159 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:41.941635 2316740 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1101 08:30:41.941700 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:41.958999 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:42.035039 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:42.292758 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:42.293333 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 08:30:42.393200 2316740 node_ready.go:57] node "addons-377223" has "Ready":"False" status (will retry)
	I1101 08:30:42.533221 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:42.798770 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:42.798971 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:43.033379 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:43.137272 2316740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.874746322s)
	I1101 08:30:43.137359 2316740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.277989671s)
	W1101 08:30:43.137384 2316740 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:43.137402 2316740 retry.go:31] will retry after 281.783242ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:43.137439 2316740 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.195787148s)
	I1101 08:30:43.140600 2316740 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1101 08:30:43.143478 2316740 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 08:30:43.146218 2316740 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1101 08:30:43.146244 2316740 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1101 08:30:43.163955 2316740 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1101 08:30:43.163982 2316740 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1101 08:30:43.176557 2316740 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 08:30:43.176580 2316740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1101 08:30:43.189931 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 08:30:43.292867 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:43.293233 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:43.419979 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:43.533607 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:43.739045 2316740 addons.go:480] Verifying addon gcp-auth=true in "addons-377223"
	I1101 08:30:43.743517 2316740 out.go:179] * Verifying gcp-auth addon...
	I1101 08:30:43.747123 2316740 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1101 08:30:43.762688 2316740 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1101 08:30:43.762713 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:43.862021 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:43.862490 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:44.033920 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:44.249955 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:44.292271 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:44.292588 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 08:30:44.326538 2316740 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:44.326624 2316740 retry.go:31] will retry after 469.798153ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:44.533915 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:44.749845 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:44.792173 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:44.792346 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:44.797588 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1101 08:30:44.892723 2316740 node_ready.go:57] node "addons-377223" has "Ready":"False" status (will retry)
	I1101 08:30:45.048267 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:45.252460 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:45.294692 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:45.296176 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:45.533033 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:30:45.721919 2316740 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:45.721952 2316740 retry.go:31] will retry after 734.26527ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:45.750557 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:45.791702 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:45.792046 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:46.032909 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:46.250927 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:46.292762 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:46.292813 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:46.456852 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:46.533185 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:46.749831 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:46.793574 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:46.794159 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:47.033961 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:47.250011 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:30:47.255507 2316740 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:47.255537 2316740 retry.go:31] will retry after 1.610799864s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:47.291513 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:47.292218 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 08:30:47.395024 2316740 node_ready.go:57] node "addons-377223" has "Ready":"False" status (will retry)
	I1101 08:30:47.533334 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:47.750662 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:47.792095 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:47.792227 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:48.033721 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:48.250622 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:48.291758 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:48.291923 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:48.535961 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:48.751224 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:48.792335 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:48.792539 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:48.866871 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:49.033484 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:49.251004 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:49.293785 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:49.294082 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:49.533619 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:30:49.650997 2316740 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:49.651027 2316740 retry.go:31] will retry after 1.785530818s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:49.749687 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:49.791889 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:49.791978 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 08:30:49.892704 2316740 node_ready.go:57] node "addons-377223" has "Ready":"False" status (will retry)
	I1101 08:30:50.032880 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:50.249921 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:50.291882 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:50.292073 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:50.533288 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:50.750175 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:50.792615 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:50.792797 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:51.033356 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:51.250650 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:51.291354 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:51.291493 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:51.436749 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:51.533306 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:51.750649 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:51.793911 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:51.794417 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:52.033683 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:52.251195 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:30:52.263502 2316740 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:52.263530 2316740 retry.go:31] will retry after 4.188195922s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:52.291843 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:52.292187 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 08:30:52.392693 2316740 node_ready.go:57] node "addons-377223" has "Ready":"False" status (will retry)
	I1101 08:30:52.532509 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:52.750505 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:52.791268 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:52.791406 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:53.033116 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:53.250088 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:53.292087 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:53.292278 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:53.533006 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:53.750794 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:53.791583 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:53.792782 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:54.033302 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:54.250374 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:54.292315 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:54.292480 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:54.533477 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:54.750578 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:54.791364 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:54.791686 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 08:30:54.892341 2316740 node_ready.go:57] node "addons-377223" has "Ready":"False" status (will retry)
	I1101 08:30:55.034703 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:55.251305 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:55.291605 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:55.291653 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:55.533483 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:55.750817 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:55.792575 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:55.792894 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:56.033049 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:56.250020 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:56.292268 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:56.292376 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:56.452177 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:56.538248 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:56.750616 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:56.792269 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:56.793094 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 08:30:56.893223 2316740 node_ready.go:57] node "addons-377223" has "Ready":"False" status (will retry)
	I1101 08:30:57.034765 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:57.250940 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:57.292191 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:57.293635 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 08:30:57.315874 2316740 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:57.315953 2316740 retry.go:31] will retry after 3.238426466s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
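The two blocks above show the same failed apply twice: once as the warning and once inside the retry message that schedules the next attempt. kubectl rejects /etc/kubernetes/addons/ig-crd.yaml because the manifest it reads is missing the required apiVersion and kind fields, and the addon applier re-runs the command with a growing delay (roughly 3s, 12s and 14s in this run). The sketch below is a minimal, hypothetical Go illustration of that retry-with-backoff pattern; the kubectl invocation, the delay values, and the helper name are assumptions drawn from this log, not minikube's actual addons.go/retry.go code.

// retry_apply.go - illustrative only; not minikube's retry implementation.
// Re-runs a kubectl apply a few times with a growing delay, mirroring the
// "apply failed, will retry after ..." behaviour seen in the log above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyAddonManifests runs the same apply that fails in the log above.
// It assumes kubectl is on PATH and the manifest paths exist on this host.
func applyAddonManifests() error {
	cmd := exec.Command("kubectl", "apply", "--force",
		"-f", "/etc/kubernetes/addons/ig-crd.yaml",
		"-f", "/etc/kubernetes/addons/ig-deployment.yaml")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	// Delays loosely modelled on the retries recorded in this log.
	delays := []time.Duration{3 * time.Second, 12 * time.Second, 14 * time.Second}
	var err error
	for attempt := 0; ; attempt++ {
		if err = applyAddonManifests(); err == nil {
			fmt.Println("addon manifests applied")
			return
		}
		if attempt >= len(delays) {
			break
		}
		fmt.Printf("apply failed, will retry after %s: %v\n", delays[attempt], err)
		time.Sleep(delays[attempt])
	}
	fmt.Printf("giving up: %v\n", err)
}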
	I1101 08:30:57.533453 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:57.750209 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:57.792660 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:57.793083 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:58.033543 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:58.250736 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:58.291946 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:58.292050 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:58.534250 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:58.751028 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:58.792455 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:58.792857 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:59.032442 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:59.250798 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:59.291606 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:59.292281 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 08:30:59.392209 2316740 node_ready.go:57] node "addons-377223" has "Ready":"False" status (will retry)
	I1101 08:30:59.532999 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:59.750689 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:59.792177 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:59.792344 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:00.057057 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:00.252068 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:00.294341 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:00.299970 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:00.533339 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:00.555370 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:31:00.750770 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:00.792344 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:00.792369 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:01.034240 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:01.251685 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:01.293336 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:01.294231 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 08:31:01.369409 2316740 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:31:01.369488 2316740 retry.go:31] will retry after 12.115012737s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 08:31:01.392942 2316740 node_ready.go:57] node "addons-377223" has "Ready":"False" status (will retry)
	I1101 08:31:01.532563 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:01.751251 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:01.792379 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:01.792776 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:02.033719 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:02.250731 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:02.291457 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:02.291558 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:02.533924 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:02.750645 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:02.792306 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:02.796628 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:03.033037 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:03.250381 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:03.292302 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:03.292512 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:03.533422 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:03.750316 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:03.792693 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:03.792900 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 08:31:03.892553 2316740 node_ready.go:57] node "addons-377223" has "Ready":"False" status (will retry)
	I1101 08:31:04.033617 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:04.250397 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:04.292814 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:04.293086 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:04.532566 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:04.750698 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:04.791714 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:04.791912 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:05.033368 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:05.250383 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:05.294379 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:05.295203 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:05.533157 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:05.750537 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:05.791979 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:05.792270 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 08:31:05.893009 2316740 node_ready.go:57] node "addons-377223" has "Ready":"False" status (will retry)
	I1101 08:31:06.033143 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:06.250557 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:06.291469 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:06.291800 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:06.533767 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:06.750751 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:06.791801 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:06.792109 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:07.033108 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:07.250010 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:07.292088 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:07.292542 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:07.534711 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:07.750352 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:07.791384 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:07.791642 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:08.033380 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:08.251418 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:08.292096 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:08.292238 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 08:31:08.393007 2316740 node_ready.go:57] node "addons-377223" has "Ready":"False" status (will retry)
	I1101 08:31:08.536297 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:08.750606 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:08.791722 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:08.792316 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:09.033844 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:09.250841 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:09.291902 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:09.292104 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:09.533356 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:09.750294 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:09.791933 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:09.792664 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:10.033938 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:10.250646 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:10.291606 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:10.291964 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:10.533375 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:10.750444 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:10.791711 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:10.791949 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 08:31:10.892850 2316740 node_ready.go:57] node "addons-377223" has "Ready":"False" status (will retry)
	I1101 08:31:11.032857 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:11.249881 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:11.291904 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:11.292159 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:11.532691 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:11.750460 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:11.791724 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:11.792093 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:12.033699 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:12.250569 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:12.291406 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:12.291650 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:12.533457 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:12.750648 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:12.791547 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:12.791682 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:13.033425 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:13.250260 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:13.291430 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:13.291626 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 08:31:13.392627 2316740 node_ready.go:57] node "addons-377223" has "Ready":"False" status (will retry)
	I1101 08:31:13.484854 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:31:13.533260 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:13.750066 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:13.793601 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:13.794099 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:14.033898 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:14.250202 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:31:14.280255 2316740 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:31:14.280287 2316740 retry.go:31] will retry after 14.15849595s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:31:14.291879 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:14.292310 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:14.532995 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:14.750891 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:14.792289 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:14.792324 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:15.032898 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:15.265130 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:15.306293 2316740 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 08:31:15.306317 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:15.315194 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:15.397234 2316740 node_ready.go:49] node "addons-377223" is "Ready"
	I1101 08:31:15.397262 2316740 node_ready.go:38] duration metric: took 39.50787249s for node "addons-377223" to be "Ready" ...
	I1101 08:31:15.397275 2316740 api_server.go:52] waiting for apiserver process to appear ...
	I1101 08:31:15.397334 2316740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 08:31:15.411805 2316740 api_server.go:72] duration metric: took 41.678035065s to wait for apiserver process to appear ...
	I1101 08:31:15.411830 2316740 api_server.go:88] waiting for apiserver healthz status ...
	I1101 08:31:15.411884 2316740 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1101 08:31:15.423082 2316740 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1101 08:31:15.426709 2316740 api_server.go:141] control plane version: v1.34.1
	I1101 08:31:15.426734 2316740 api_server.go:131] duration metric: took 14.896769ms to wait for apiserver health ...
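At this point the node has gone Ready and the apiserver health probe at https://192.168.49.2:8443/healthz has returned 200. Below is a minimal Go sketch of such a healthz probe; the endpoint URL is taken from the log, while the timeout and the skipped TLS verification are assumptions made for the example, not minikube's api_server.go code.

// healthz_probe.go - illustrative sketch of an apiserver health probe like
// the one logged above; URL from the log, everything else assumed.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed certificate in this setup, so the
		// example skips verification purely for illustration.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}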
	I1101 08:31:15.426743 2316740 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 08:31:15.465863 2316740 system_pods.go:59] 19 kube-system pods found
	I1101 08:31:15.465900 2316740 system_pods.go:61] "coredns-66bc5c9577-jfpff" [348e6114-7b6c-48da-8290-9951dab8c754] Pending
	I1101 08:31:15.465908 2316740 system_pods.go:61] "csi-hostpath-attacher-0" [1dd89e79-ddca-42fd-b7a1-af8280e00ad1] Pending
	I1101 08:31:15.465946 2316740 system_pods.go:61] "csi-hostpath-resizer-0" [5a926beb-ddcb-44da-8dc4-1da2b2d482b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 08:31:15.465958 2316740 system_pods.go:61] "csi-hostpathplugin-9rxph" [c14a0060-c922-484b-aadf-c2df39706fad] Pending
	I1101 08:31:15.465965 2316740 system_pods.go:61] "etcd-addons-377223" [8e162b68-3c71-4956-a399-e73e3cd2cc56] Running
	I1101 08:31:15.465969 2316740 system_pods.go:61] "kindnet-g288l" [47d7d7be-916a-4b37-80b7-6c05dd045040] Running
	I1101 08:31:15.465980 2316740 system_pods.go:61] "kube-apiserver-addons-377223" [a914d233-7cef-4286-af79-87ad97a5f593] Running
	I1101 08:31:15.465985 2316740 system_pods.go:61] "kube-controller-manager-addons-377223" [eacdf52d-dccf-49ac-82b0-fc999bd249d4] Running
	I1101 08:31:15.465990 2316740 system_pods.go:61] "kube-ingress-dns-minikube" [f83561ff-b559-4279-8112-708aa3b82897] Pending
	I1101 08:31:15.466017 2316740 system_pods.go:61] "kube-proxy-8p9ks" [d28cd1b8-2fa2-4b2c-b3be-6909dbfde171] Running
	I1101 08:31:15.466035 2316740 system_pods.go:61] "kube-scheduler-addons-377223" [a4f1d0d2-c157-4ecf-8503-cf2d3ffc7018] Running
	I1101 08:31:15.466046 2316740 system_pods.go:61] "metrics-server-85b7d694d7-w9zzf" [648c1d34-d194-4696-9225-2f20f84b51df] Pending
	I1101 08:31:15.466051 2316740 system_pods.go:61] "nvidia-device-plugin-daemonset-nh42v" [bf77424f-f0e0-41b0-9413-f5db070cde1b] Pending
	I1101 08:31:15.466059 2316740 system_pods.go:61] "registry-6b586f9694-hgg7l" [09f8c054-e829-4a8f-99ae-15f1199f9ce2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:31:15.466068 2316740 system_pods.go:61] "registry-creds-764b6fb674-jr4nd" [2162454c-4ead-4a3a-aeb4-e07bbd81c04c] Pending
	I1101 08:31:15.466074 2316740 system_pods.go:61] "registry-proxy-ntzvs" [31f9ce22-49ba-49b5-8f43-927666ffacc6] Pending
	I1101 08:31:15.466078 2316740 system_pods.go:61] "snapshot-controller-7d9fbc56b8-jjjfk" [0348bdd6-0344-4a9f-9873-9cc11add902e] Pending
	I1101 08:31:15.466088 2316740 system_pods.go:61] "snapshot-controller-7d9fbc56b8-xjs28" [913d4c88-bbf8-4e4e-9beb-87dcbc777d20] Pending
	I1101 08:31:15.466094 2316740 system_pods.go:61] "storage-provisioner" [e7ecfd32-4b4e-4f67-a9be-7310f1b83c46] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 08:31:15.466113 2316740 system_pods.go:74] duration metric: took 39.343916ms to wait for pod list to return data ...
	I1101 08:31:15.466123 2316740 default_sa.go:34] waiting for default service account to be created ...
	I1101 08:31:15.525542 2316740 default_sa.go:45] found service account: "default"
	I1101 08:31:15.525568 2316740 default_sa.go:55] duration metric: took 59.429847ms for default service account to be created ...
	I1101 08:31:15.525579 2316740 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 08:31:15.597833 2316740 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 08:31:15.597859 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:15.605078 2316740 system_pods.go:86] 19 kube-system pods found
	I1101 08:31:15.605113 2316740 system_pods.go:89] "coredns-66bc5c9577-jfpff" [348e6114-7b6c-48da-8290-9951dab8c754] Pending
	I1101 08:31:15.605121 2316740 system_pods.go:89] "csi-hostpath-attacher-0" [1dd89e79-ddca-42fd-b7a1-af8280e00ad1] Pending
	I1101 08:31:15.605157 2316740 system_pods.go:89] "csi-hostpath-resizer-0" [5a926beb-ddcb-44da-8dc4-1da2b2d482b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 08:31:15.605171 2316740 system_pods.go:89] "csi-hostpathplugin-9rxph" [c14a0060-c922-484b-aadf-c2df39706fad] Pending
	I1101 08:31:15.605177 2316740 system_pods.go:89] "etcd-addons-377223" [8e162b68-3c71-4956-a399-e73e3cd2cc56] Running
	I1101 08:31:15.605182 2316740 system_pods.go:89] "kindnet-g288l" [47d7d7be-916a-4b37-80b7-6c05dd045040] Running
	I1101 08:31:15.605187 2316740 system_pods.go:89] "kube-apiserver-addons-377223" [a914d233-7cef-4286-af79-87ad97a5f593] Running
	I1101 08:31:15.605198 2316740 system_pods.go:89] "kube-controller-manager-addons-377223" [eacdf52d-dccf-49ac-82b0-fc999bd249d4] Running
	I1101 08:31:15.605202 2316740 system_pods.go:89] "kube-ingress-dns-minikube" [f83561ff-b559-4279-8112-708aa3b82897] Pending
	I1101 08:31:15.605206 2316740 system_pods.go:89] "kube-proxy-8p9ks" [d28cd1b8-2fa2-4b2c-b3be-6909dbfde171] Running
	I1101 08:31:15.605226 2316740 system_pods.go:89] "kube-scheduler-addons-377223" [a4f1d0d2-c157-4ecf-8503-cf2d3ffc7018] Running
	I1101 08:31:15.605244 2316740 system_pods.go:89] "metrics-server-85b7d694d7-w9zzf" [648c1d34-d194-4696-9225-2f20f84b51df] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:31:15.605249 2316740 system_pods.go:89] "nvidia-device-plugin-daemonset-nh42v" [bf77424f-f0e0-41b0-9413-f5db070cde1b] Pending
	I1101 08:31:15.605260 2316740 system_pods.go:89] "registry-6b586f9694-hgg7l" [09f8c054-e829-4a8f-99ae-15f1199f9ce2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:31:15.605264 2316740 system_pods.go:89] "registry-creds-764b6fb674-jr4nd" [2162454c-4ead-4a3a-aeb4-e07bbd81c04c] Pending
	I1101 08:31:15.605275 2316740 system_pods.go:89] "registry-proxy-ntzvs" [31f9ce22-49ba-49b5-8f43-927666ffacc6] Pending
	I1101 08:31:15.605279 2316740 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jjjfk" [0348bdd6-0344-4a9f-9873-9cc11add902e] Pending
	I1101 08:31:15.605284 2316740 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xjs28" [913d4c88-bbf8-4e4e-9beb-87dcbc777d20] Pending
	I1101 08:31:15.605307 2316740 system_pods.go:89] "storage-provisioner" [e7ecfd32-4b4e-4f67-a9be-7310f1b83c46] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 08:31:15.605330 2316740 retry.go:31] will retry after 227.808824ms: missing components: kube-dns
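The pod list above is polled repeatedly until every required component (here kube-dns, i.e. CoreDNS) reports Running. A minimal client-go sketch of one such poll follows; it is illustrative only, and the kubeconfig path is an assumption rather than the exact call made by system_pods.go.

// pods_status.go - illustrative only; lists kube-system pods and their
// phases with client-go, similar to the poll recorded in the log above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed path; the log shows minikube using /var/lib/minikube/kubeconfig
	// on the node, but any valid kubeconfig works for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q %s\n", p.Name, p.Status.Phase)
	}
}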
	I1101 08:31:15.753002 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:15.796467 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:15.796708 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:15.843706 2316740 system_pods.go:86] 19 kube-system pods found
	I1101 08:31:15.843745 2316740 system_pods.go:89] "coredns-66bc5c9577-jfpff" [348e6114-7b6c-48da-8290-9951dab8c754] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:31:15.843754 2316740 system_pods.go:89] "csi-hostpath-attacher-0" [1dd89e79-ddca-42fd-b7a1-af8280e00ad1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 08:31:15.843761 2316740 system_pods.go:89] "csi-hostpath-resizer-0" [5a926beb-ddcb-44da-8dc4-1da2b2d482b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 08:31:15.843795 2316740 system_pods.go:89] "csi-hostpathplugin-9rxph" [c14a0060-c922-484b-aadf-c2df39706fad] Pending
	I1101 08:31:15.843800 2316740 system_pods.go:89] "etcd-addons-377223" [8e162b68-3c71-4956-a399-e73e3cd2cc56] Running
	I1101 08:31:15.843805 2316740 system_pods.go:89] "kindnet-g288l" [47d7d7be-916a-4b37-80b7-6c05dd045040] Running
	I1101 08:31:15.843810 2316740 system_pods.go:89] "kube-apiserver-addons-377223" [a914d233-7cef-4286-af79-87ad97a5f593] Running
	I1101 08:31:15.843814 2316740 system_pods.go:89] "kube-controller-manager-addons-377223" [eacdf52d-dccf-49ac-82b0-fc999bd249d4] Running
	I1101 08:31:15.843824 2316740 system_pods.go:89] "kube-ingress-dns-minikube" [f83561ff-b559-4279-8112-708aa3b82897] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:31:15.843828 2316740 system_pods.go:89] "kube-proxy-8p9ks" [d28cd1b8-2fa2-4b2c-b3be-6909dbfde171] Running
	I1101 08:31:15.843834 2316740 system_pods.go:89] "kube-scheduler-addons-377223" [a4f1d0d2-c157-4ecf-8503-cf2d3ffc7018] Running
	I1101 08:31:15.843841 2316740 system_pods.go:89] "metrics-server-85b7d694d7-w9zzf" [648c1d34-d194-4696-9225-2f20f84b51df] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:31:15.843875 2316740 system_pods.go:89] "nvidia-device-plugin-daemonset-nh42v" [bf77424f-f0e0-41b0-9413-f5db070cde1b] Pending
	I1101 08:31:15.843882 2316740 system_pods.go:89] "registry-6b586f9694-hgg7l" [09f8c054-e829-4a8f-99ae-15f1199f9ce2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:31:15.843900 2316740 system_pods.go:89] "registry-creds-764b6fb674-jr4nd" [2162454c-4ead-4a3a-aeb4-e07bbd81c04c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:31:15.843917 2316740 system_pods.go:89] "registry-proxy-ntzvs" [31f9ce22-49ba-49b5-8f43-927666ffacc6] Pending
	I1101 08:31:15.843925 2316740 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jjjfk" [0348bdd6-0344-4a9f-9873-9cc11add902e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:31:15.843932 2316740 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xjs28" [913d4c88-bbf8-4e4e-9beb-87dcbc777d20] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:31:15.843942 2316740 system_pods.go:89] "storage-provisioner" [e7ecfd32-4b4e-4f67-a9be-7310f1b83c46] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 08:31:15.843958 2316740 retry.go:31] will retry after 263.73777ms: missing components: kube-dns
	I1101 08:31:16.034162 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:16.140994 2316740 system_pods.go:86] 19 kube-system pods found
	I1101 08:31:16.141034 2316740 system_pods.go:89] "coredns-66bc5c9577-jfpff" [348e6114-7b6c-48da-8290-9951dab8c754] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:31:16.141043 2316740 system_pods.go:89] "csi-hostpath-attacher-0" [1dd89e79-ddca-42fd-b7a1-af8280e00ad1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 08:31:16.141085 2316740 system_pods.go:89] "csi-hostpath-resizer-0" [5a926beb-ddcb-44da-8dc4-1da2b2d482b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 08:31:16.141101 2316740 system_pods.go:89] "csi-hostpathplugin-9rxph" [c14a0060-c922-484b-aadf-c2df39706fad] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 08:31:16.141106 2316740 system_pods.go:89] "etcd-addons-377223" [8e162b68-3c71-4956-a399-e73e3cd2cc56] Running
	I1101 08:31:16.141112 2316740 system_pods.go:89] "kindnet-g288l" [47d7d7be-916a-4b37-80b7-6c05dd045040] Running
	I1101 08:31:16.141116 2316740 system_pods.go:89] "kube-apiserver-addons-377223" [a914d233-7cef-4286-af79-87ad97a5f593] Running
	I1101 08:31:16.141122 2316740 system_pods.go:89] "kube-controller-manager-addons-377223" [eacdf52d-dccf-49ac-82b0-fc999bd249d4] Running
	I1101 08:31:16.141136 2316740 system_pods.go:89] "kube-ingress-dns-minikube" [f83561ff-b559-4279-8112-708aa3b82897] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:31:16.141160 2316740 system_pods.go:89] "kube-proxy-8p9ks" [d28cd1b8-2fa2-4b2c-b3be-6909dbfde171] Running
	I1101 08:31:16.141166 2316740 system_pods.go:89] "kube-scheduler-addons-377223" [a4f1d0d2-c157-4ecf-8503-cf2d3ffc7018] Running
	I1101 08:31:16.141172 2316740 system_pods.go:89] "metrics-server-85b7d694d7-w9zzf" [648c1d34-d194-4696-9225-2f20f84b51df] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:31:16.141180 2316740 system_pods.go:89] "nvidia-device-plugin-daemonset-nh42v" [bf77424f-f0e0-41b0-9413-f5db070cde1b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 08:31:16.141196 2316740 system_pods.go:89] "registry-6b586f9694-hgg7l" [09f8c054-e829-4a8f-99ae-15f1199f9ce2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:31:16.141211 2316740 system_pods.go:89] "registry-creds-764b6fb674-jr4nd" [2162454c-4ead-4a3a-aeb4-e07bbd81c04c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:31:16.141217 2316740 system_pods.go:89] "registry-proxy-ntzvs" [31f9ce22-49ba-49b5-8f43-927666ffacc6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 08:31:16.141230 2316740 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jjjfk" [0348bdd6-0344-4a9f-9873-9cc11add902e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:31:16.141239 2316740 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xjs28" [913d4c88-bbf8-4e4e-9beb-87dcbc777d20] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:31:16.141247 2316740 system_pods.go:89] "storage-provisioner" [e7ecfd32-4b4e-4f67-a9be-7310f1b83c46] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 08:31:16.141273 2316740 retry.go:31] will retry after 339.770132ms: missing components: kube-dns
	I1101 08:31:16.250768 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:16.316421 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:16.316630 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:16.525141 2316740 system_pods.go:86] 19 kube-system pods found
	I1101 08:31:16.525181 2316740 system_pods.go:89] "coredns-66bc5c9577-jfpff" [348e6114-7b6c-48da-8290-9951dab8c754] Running
	I1101 08:31:16.525193 2316740 system_pods.go:89] "csi-hostpath-attacher-0" [1dd89e79-ddca-42fd-b7a1-af8280e00ad1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 08:31:16.525243 2316740 system_pods.go:89] "csi-hostpath-resizer-0" [5a926beb-ddcb-44da-8dc4-1da2b2d482b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 08:31:16.525258 2316740 system_pods.go:89] "csi-hostpathplugin-9rxph" [c14a0060-c922-484b-aadf-c2df39706fad] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 08:31:16.525264 2316740 system_pods.go:89] "etcd-addons-377223" [8e162b68-3c71-4956-a399-e73e3cd2cc56] Running
	I1101 08:31:16.525276 2316740 system_pods.go:89] "kindnet-g288l" [47d7d7be-916a-4b37-80b7-6c05dd045040] Running
	I1101 08:31:16.525280 2316740 system_pods.go:89] "kube-apiserver-addons-377223" [a914d233-7cef-4286-af79-87ad97a5f593] Running
	I1101 08:31:16.525285 2316740 system_pods.go:89] "kube-controller-manager-addons-377223" [eacdf52d-dccf-49ac-82b0-fc999bd249d4] Running
	I1101 08:31:16.525314 2316740 system_pods.go:89] "kube-ingress-dns-minikube" [f83561ff-b559-4279-8112-708aa3b82897] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:31:16.525318 2316740 system_pods.go:89] "kube-proxy-8p9ks" [d28cd1b8-2fa2-4b2c-b3be-6909dbfde171] Running
	I1101 08:31:16.525340 2316740 system_pods.go:89] "kube-scheduler-addons-377223" [a4f1d0d2-c157-4ecf-8503-cf2d3ffc7018] Running
	I1101 08:31:16.525348 2316740 system_pods.go:89] "metrics-server-85b7d694d7-w9zzf" [648c1d34-d194-4696-9225-2f20f84b51df] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:31:16.525361 2316740 system_pods.go:89] "nvidia-device-plugin-daemonset-nh42v" [bf77424f-f0e0-41b0-9413-f5db070cde1b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 08:31:16.525368 2316740 system_pods.go:89] "registry-6b586f9694-hgg7l" [09f8c054-e829-4a8f-99ae-15f1199f9ce2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:31:16.525378 2316740 system_pods.go:89] "registry-creds-764b6fb674-jr4nd" [2162454c-4ead-4a3a-aeb4-e07bbd81c04c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:31:16.525385 2316740 system_pods.go:89] "registry-proxy-ntzvs" [31f9ce22-49ba-49b5-8f43-927666ffacc6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 08:31:16.525406 2316740 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jjjfk" [0348bdd6-0344-4a9f-9873-9cc11add902e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:31:16.525421 2316740 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xjs28" [913d4c88-bbf8-4e4e-9beb-87dcbc777d20] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:31:16.525426 2316740 system_pods.go:89] "storage-provisioner" [e7ecfd32-4b4e-4f67-a9be-7310f1b83c46] Running
	I1101 08:31:16.525439 2316740 system_pods.go:126] duration metric: took 999.854307ms to wait for k8s-apps to be running ...
	I1101 08:31:16.525447 2316740 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 08:31:16.525517 2316740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 08:31:16.545979 2316740 system_svc.go:56] duration metric: took 20.523276ms WaitForService to wait for kubelet
	I1101 08:31:16.546051 2316740 kubeadm.go:587] duration metric: took 42.812285127s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 08:31:16.546086 2316740 node_conditions.go:102] verifying NodePressure condition ...
	I1101 08:31:16.549589 2316740 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 08:31:16.549668 2316740 node_conditions.go:123] node cpu capacity is 2
	I1101 08:31:16.549695 2316740 node_conditions.go:105] duration metric: took 3.58939ms to run NodePressure ...
	I1101 08:31:16.549720 2316740 start.go:242] waiting for startup goroutines ...
	I1101 08:31:16.608198 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:16.750519 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:16.793566 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:16.794208 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:17.036434 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:17.251118 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:17.293917 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:17.294332 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:17.535410 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:17.750727 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:17.791750 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:17.791951 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:18.034026 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:18.251552 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:18.292949 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:18.294102 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:18.541038 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:18.750444 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:18.793527 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:18.793946 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:19.034127 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:19.250603 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:19.293542 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:19.294068 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:19.534417 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:19.750706 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:19.793255 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:19.793674 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:20.034604 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:20.251083 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:20.293556 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:20.293715 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:20.532720 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:20.750644 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:20.792798 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:20.793122 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:21.033835 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:21.250119 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:21.293688 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:21.294058 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:21.534033 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:21.750890 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:21.792512 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:21.792617 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:22.034118 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:22.250621 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:22.293339 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:22.293762 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:22.533664 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:22.750845 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:22.793118 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:22.793478 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:23.035342 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:23.250471 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:23.292719 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:23.293439 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:23.534283 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:23.750126 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:23.792263 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:23.792751 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:24.033954 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:24.251228 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:24.293185 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:24.293605 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:24.533430 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:24.750544 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:24.792879 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:24.793278 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:25.033819 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:25.252333 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:25.294583 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:25.295058 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:25.536321 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:25.752243 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:25.793590 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:25.793846 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:26.034046 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:26.251568 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:26.358656 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:26.359082 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:26.534713 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:26.751221 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:26.792727 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:26.797385 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:27.034273 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:27.250930 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:27.293859 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:27.294132 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:27.534362 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:27.750862 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:27.793499 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:27.794072 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:28.034096 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:28.251436 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:28.292195 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:28.292215 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:28.439581 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:31:28.536990 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:28.750765 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:28.792393 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:28.792799 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:29.033934 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:29.250267 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:31:29.297497 2316740 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:31:29.297573 2316740 retry.go:31] will retry after 19.615074116s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:31:29.298115 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:29.298319 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:29.533315 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:29.750232 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:29.792440 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:29.792875 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:30.034607 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:30.251563 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:30.293617 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:30.294132 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:30.534104 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:30.750499 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:30.792743 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:30.792897 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:31.033200 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:31.250569 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:31.292714 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:31.292864 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:31.533121 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:31.750109 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:31.792895 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:31.793145 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:32.033253 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:32.250395 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:32.293486 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:32.293924 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:32.533433 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:32.750246 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:32.791146 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:32.791482 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:33.034079 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:33.250434 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:33.291685 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:33.292348 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:33.534082 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:33.750934 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:33.792617 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:33.793609 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:34.033976 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:34.250734 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:34.292567 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:34.292784 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:34.533297 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:34.750127 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:34.791807 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:34.792510 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:35.034443 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:35.250456 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:35.292413 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:35.292717 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:35.534690 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:35.751158 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:35.793203 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:35.793532 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:36.034489 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:36.250450 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:36.291998 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:36.292104 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:36.533502 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:36.750393 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:36.791669 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:36.792025 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:37.033596 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:37.250292 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:37.291765 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:37.291995 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:37.533134 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:37.749986 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:37.792800 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:37.793015 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:38.034384 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:38.250445 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:38.293013 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:38.293315 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:38.537610 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:38.750380 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:38.792001 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:38.792513 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:39.033999 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:39.250450 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:39.291789 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:39.292236 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:39.533684 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:39.750390 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:39.793330 2316740 kapi.go:107] duration metric: took 1m0.004984544s to wait for kubernetes.io/minikube-addons=registry ...
	I1101 08:31:39.793807 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:40.033823 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:40.250865 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:40.292131 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:40.533889 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:40.750578 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:40.791336 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:41.033856 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:41.249905 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:41.292728 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:41.536927 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:41.750075 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:41.792360 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:42.034502 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:42.251398 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:42.291966 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:42.534783 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:42.750939 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:42.791874 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:43.044426 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:43.254267 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:43.292838 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:43.535085 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:43.751271 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:43.852595 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:44.033031 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:44.250876 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:44.292497 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:44.535011 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:44.749738 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:44.791710 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:45.034208 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:45.252580 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:45.303373 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:45.533980 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:45.750383 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:45.791263 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:46.033975 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:46.250357 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:46.295420 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:46.537383 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:46.750030 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:46.795072 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:47.033954 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:47.251620 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:47.306644 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:47.533405 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:47.749884 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:47.824108 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:48.034366 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:48.250186 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:48.292156 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:48.537617 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:48.750208 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:48.792145 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:48.913536 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:31:49.033285 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:49.250422 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:49.292270 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:49.533723 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:49.751408 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:49.791836 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:50.034455 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:50.059568 2316740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.145989892s)
	W1101 08:31:50.059653 2316740 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:31:50.059685 2316740 retry.go:31] will retry after 42.393662681s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:31:50.250837 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:50.291945 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:50.534057 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:50.750482 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:50.791933 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:51.034194 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:51.250391 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:51.293306 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:51.533696 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:51.750538 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:51.791938 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:52.033798 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:52.250286 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:52.292093 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:52.549499 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:52.751044 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:52.793917 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:53.033494 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:53.250983 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:53.292317 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:53.536148 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:53.750514 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:53.791883 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:54.033491 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:54.250461 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:54.291544 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:54.541159 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:54.750428 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:54.792435 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:55.034338 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:55.250869 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:55.291913 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:55.533683 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:55.751031 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:55.792689 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:56.033510 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:56.250998 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:56.292345 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:56.534013 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:56.752224 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:56.792573 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:57.033262 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:57.250407 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:57.291506 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:57.534524 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:57.750728 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:57.851283 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:58.034951 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:58.251317 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:58.292608 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:58.540582 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:58.751171 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:58.792254 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:59.034292 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:59.251628 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:59.292875 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:59.533762 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:59.751332 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:59.791367 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:00.045987 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:32:00.254350 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:00.298934 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:00.553156 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:32:00.753578 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:00.794530 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:01.033422 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:32:01.251752 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:01.291926 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:01.532971 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:32:01.750949 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:01.792228 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:02.036758 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:32:02.251511 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:02.291593 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:02.534021 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:32:02.751457 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:02.792421 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:03.034158 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:32:03.251762 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:03.295009 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:03.534110 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:32:03.752507 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:03.797892 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:04.033827 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:32:04.250421 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:04.292150 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:04.534041 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:32:04.750503 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:04.791658 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:05.034398 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:32:05.249956 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:05.291634 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:05.534390 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:32:05.750202 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:05.795507 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:06.033633 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:32:06.250488 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:06.291937 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:06.533821 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:32:06.751354 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:06.792312 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:07.033129 2316740 kapi.go:107] duration metric: took 1m26.503330863s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1101 08:32:07.251052 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:07.292035 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:07.750233 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:07.791816 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:08.251490 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:08.291334 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:08.750709 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:08.791758 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:09.250670 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:09.291529 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:09.751252 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:09.792259 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:10.250604 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:10.291642 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:10.751069 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:10.791843 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:11.250838 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:11.291997 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:11.750140 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:11.791916 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:12.250041 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:12.291699 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:12.749931 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:12.792095 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:13.251200 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:13.292181 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:13.749760 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:13.791706 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:14.250099 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:14.291926 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:14.750150 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:14.792067 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:15.249844 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:15.291819 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:15.750708 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:15.791595 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:16.249515 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:16.291369 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:16.750742 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:16.791953 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:17.250234 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:17.292273 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:17.750798 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:17.791464 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:18.250824 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:18.291734 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:18.750192 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:18.792103 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:19.250626 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:19.291303 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:19.750820 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:19.852615 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:20.250463 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:20.291377 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:20.750739 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:20.791782 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:21.250926 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:21.292337 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:21.750706 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:21.792025 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:22.251285 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:22.351617 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:22.750946 2316740 kapi.go:107] duration metric: took 1m39.003820912s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1101 08:32:22.752528 2316740 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-377223 cluster.
	I1101 08:32:22.753464 2316740 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1101 08:32:22.754453 2316740 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
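The gcp-auth messages above describe the opt-out mechanism: a pod that should not have the GCP credentials mounted can carry the `gcp-auth-skip-secret` label. A minimal pod spec using that label might look like the sketch below; only the label key comes from the message above, while the label value "true", the pod name, and the image are illustrative assumptions:

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds              # hypothetical pod name
      labels:
        gcp-auth-skip-secret: "true"  # label key taken from the gcp-auth message; value assumed
    spec:
      containers:
      - name: app
        image: nginx                  # placeholder image for illustration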
	I1101 08:32:22.791998 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:23.292629 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:23.792100 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:24.292107 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:24.792104 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:25.291647 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:25.791784 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:26.292368 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:26.791490 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:27.292074 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:27.792167 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:28.292159 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:28.801365 2316740 kapi.go:107] duration metric: took 1m49.012941762s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1101 08:32:32.453609 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1101 08:32:33.299536 2316740 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 08:32:33.299627 2316740 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1101 08:32:33.302827 2316740 out.go:179] * Enabled addons: registry-creds, amd-gpu-device-plugin, storage-provisioner, default-storageclass, cloud-spanner, nvidia-device-plugin, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I1101 08:32:33.305514 2316740 addons.go:515] duration metric: took 1m59.571357205s for enable addons: enabled=[registry-creds amd-gpu-device-plugin storage-provisioner default-storageclass cloud-spanner nvidia-device-plugin ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I1101 08:32:33.305561 2316740 start.go:247] waiting for cluster config update ...
	I1101 08:32:33.305583 2316740 start.go:256] writing updated cluster config ...
	I1101 08:32:33.305885 2316740 ssh_runner.go:195] Run: rm -f paused
	I1101 08:32:33.310031 2316740 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 08:32:33.314220 2316740 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jfpff" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:32:33.319743 2316740 pod_ready.go:94] pod "coredns-66bc5c9577-jfpff" is "Ready"
	I1101 08:32:33.319768 2316740 pod_ready.go:86] duration metric: took 5.526508ms for pod "coredns-66bc5c9577-jfpff" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:32:33.322388 2316740 pod_ready.go:83] waiting for pod "etcd-addons-377223" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:32:33.327109 2316740 pod_ready.go:94] pod "etcd-addons-377223" is "Ready"
	I1101 08:32:33.327185 2316740 pod_ready.go:86] duration metric: took 4.769582ms for pod "etcd-addons-377223" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:32:33.329769 2316740 pod_ready.go:83] waiting for pod "kube-apiserver-addons-377223" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:32:33.334540 2316740 pod_ready.go:94] pod "kube-apiserver-addons-377223" is "Ready"
	I1101 08:32:33.334621 2316740 pod_ready.go:86] duration metric: took 4.828476ms for pod "kube-apiserver-addons-377223" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:32:33.337098 2316740 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-377223" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:32:33.714616 2316740 pod_ready.go:94] pod "kube-controller-manager-addons-377223" is "Ready"
	I1101 08:32:33.714685 2316740 pod_ready.go:86] duration metric: took 377.559313ms for pod "kube-controller-manager-addons-377223" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:32:33.913627 2316740 pod_ready.go:83] waiting for pod "kube-proxy-8p9ks" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:32:34.314347 2316740 pod_ready.go:94] pod "kube-proxy-8p9ks" is "Ready"
	I1101 08:32:34.314372 2316740 pod_ready.go:86] duration metric: took 400.720811ms for pod "kube-proxy-8p9ks" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:32:34.514318 2316740 pod_ready.go:83] waiting for pod "kube-scheduler-addons-377223" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:32:34.914076 2316740 pod_ready.go:94] pod "kube-scheduler-addons-377223" is "Ready"
	I1101 08:32:34.914102 2316740 pod_ready.go:86] duration metric: took 399.705202ms for pod "kube-scheduler-addons-377223" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:32:34.914115 2316740 pod_ready.go:40] duration metric: took 1.604056909s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 08:32:34.964978 2316740 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 08:32:34.971421 2316740 out.go:179] * Done! kubectl is now configured to use "addons-377223" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 01 08:35:43 addons-377223 crio[830]: time="2025-11-01T08:35:43.502729238Z" level=info msg="Removed container 83c14e434d46d22c0ee0463fe7db169916dbf246f3a8c8cd793bc183980ec861: kube-system/registry-creds-764b6fb674-jr4nd/registry-creds" id=4af4191d-c3e5-49ce-9f34-a08ac6e5fd99 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 08:35:44 addons-377223 crio[830]: time="2025-11-01T08:35:44.057620598Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-9pqmz/POD" id=7de9664f-b571-4945-8000-3521c6175eb0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 08:35:44 addons-377223 crio[830]: time="2025-11-01T08:35:44.057688567Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 08:35:44 addons-377223 crio[830]: time="2025-11-01T08:35:44.073269251Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-9pqmz Namespace:default ID:95505f5ac73da4d82f95703edb9528038e9dead64b0951852f91f9ea62a7c56e UID:e84ff615-dd79-461c-8517-5c84023e4a28 NetNS:/var/run/netns/10ce8dbc-cd55-4a5a-b3f3-f70d417efd9e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400282b610}] Aliases:map[]}"
	Nov 01 08:35:44 addons-377223 crio[830]: time="2025-11-01T08:35:44.073311859Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-9pqmz to CNI network \"kindnet\" (type=ptp)"
	Nov 01 08:35:44 addons-377223 crio[830]: time="2025-11-01T08:35:44.083722707Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-9pqmz Namespace:default ID:95505f5ac73da4d82f95703edb9528038e9dead64b0951852f91f9ea62a7c56e UID:e84ff615-dd79-461c-8517-5c84023e4a28 NetNS:/var/run/netns/10ce8dbc-cd55-4a5a-b3f3-f70d417efd9e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400282b610}] Aliases:map[]}"
	Nov 01 08:35:44 addons-377223 crio[830]: time="2025-11-01T08:35:44.084137112Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-9pqmz for CNI network kindnet (type=ptp)"
	Nov 01 08:35:44 addons-377223 crio[830]: time="2025-11-01T08:35:44.087813673Z" level=info msg="Ran pod sandbox 95505f5ac73da4d82f95703edb9528038e9dead64b0951852f91f9ea62a7c56e with infra container: default/hello-world-app-5d498dc89-9pqmz/POD" id=7de9664f-b571-4945-8000-3521c6175eb0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 08:35:44 addons-377223 crio[830]: time="2025-11-01T08:35:44.095263188Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=ab0a4859-86aa-4181-b976-b665a85deda4 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 08:35:44 addons-377223 crio[830]: time="2025-11-01T08:35:44.095607301Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=ab0a4859-86aa-4181-b976-b665a85deda4 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 08:35:44 addons-377223 crio[830]: time="2025-11-01T08:35:44.095685502Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=ab0a4859-86aa-4181-b976-b665a85deda4 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 08:35:44 addons-377223 crio[830]: time="2025-11-01T08:35:44.098487472Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=a602dcc2-d72a-434f-9c97-b21e51c8fbaf name=/runtime.v1.ImageService/PullImage
	Nov 01 08:35:44 addons-377223 crio[830]: time="2025-11-01T08:35:44.100044801Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Nov 01 08:35:44 addons-377223 crio[830]: time="2025-11-01T08:35:44.74098985Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=a602dcc2-d72a-434f-9c97-b21e51c8fbaf name=/runtime.v1.ImageService/PullImage
	Nov 01 08:35:44 addons-377223 crio[830]: time="2025-11-01T08:35:44.741791156Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=b73d6743-e54e-4253-bb1a-a74968642d55 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 08:35:44 addons-377223 crio[830]: time="2025-11-01T08:35:44.745885281Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=e02bec46-6748-488e-86f3-ef5651b07f9a name=/runtime.v1.ImageService/ImageStatus
	Nov 01 08:35:44 addons-377223 crio[830]: time="2025-11-01T08:35:44.754171432Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-9pqmz/hello-world-app" id=205acf46-50b5-4aa6-8fb5-a43a46d76470 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 08:35:44 addons-377223 crio[830]: time="2025-11-01T08:35:44.754596732Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 08:35:44 addons-377223 crio[830]: time="2025-11-01T08:35:44.761569568Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 08:35:44 addons-377223 crio[830]: time="2025-11-01T08:35:44.761863975Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/30fa79f664f8b4706c1bee2376120eb24f3c2d3c733b673dc2dcd1c0c499563b/merged/etc/passwd: no such file or directory"
	Nov 01 08:35:44 addons-377223 crio[830]: time="2025-11-01T08:35:44.761953581Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/30fa79f664f8b4706c1bee2376120eb24f3c2d3c733b673dc2dcd1c0c499563b/merged/etc/group: no such file or directory"
	Nov 01 08:35:44 addons-377223 crio[830]: time="2025-11-01T08:35:44.762302642Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 08:35:44 addons-377223 crio[830]: time="2025-11-01T08:35:44.791361128Z" level=info msg="Created container dcabb571dcff269247d060e56950ce4ab1f84d7153a16f196f1a6dc84e381b1b: default/hello-world-app-5d498dc89-9pqmz/hello-world-app" id=205acf46-50b5-4aa6-8fb5-a43a46d76470 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 08:35:44 addons-377223 crio[830]: time="2025-11-01T08:35:44.792800512Z" level=info msg="Starting container: dcabb571dcff269247d060e56950ce4ab1f84d7153a16f196f1a6dc84e381b1b" id=8fcd1327-7186-49fa-9b7e-cbd98b911b0d name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 08:35:44 addons-377223 crio[830]: time="2025-11-01T08:35:44.799624675Z" level=info msg="Started container" PID=7314 containerID=dcabb571dcff269247d060e56950ce4ab1f84d7153a16f196f1a6dc84e381b1b description=default/hello-world-app-5d498dc89-9pqmz/hello-world-app id=8fcd1327-7186-49fa-9b7e-cbd98b911b0d name=/runtime.v1.RuntimeService/StartContainer sandboxID=95505f5ac73da4d82f95703edb9528038e9dead64b0951852f91f9ea62a7c56e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	dcabb571dcff2       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   95505f5ac73da       hello-world-app-5d498dc89-9pqmz             default
	22b63ec361fd2       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             2 seconds ago            Exited              registry-creds                           2                   44ee2132a592c       registry-creds-764b6fb674-jr4nd             kube-system
	a9deaf0b8d3e9       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90                                              2 minutes ago            Running             nginx                                    0                   2af301844e7f4       nginx                                       default
	1075cdd73f4ae       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          3 minutes ago            Running             busybox                                  0                   6d145ff483fdd       busybox                                     default
	04681fc01736e       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             3 minutes ago            Running             controller                               0                   c4eab738759ff       ingress-nginx-controller-675c5ddd98-rjv49   ingress-nginx
	13678f17060df       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   16e766bc2f8a4       gcp-auth-78565c9fb4-sf5ck                   gcp-auth
	414b5bc39c329       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   1c24bf0a4a833       csi-hostpathplugin-9rxph                    kube-system
	061ec86ab4df3       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   1c24bf0a4a833       csi-hostpathplugin-9rxph                    kube-system
	1e44c8f5f77ec       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   1c24bf0a4a833       csi-hostpathplugin-9rxph                    kube-system
	21f927e7d6330       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             3 minutes ago            Exited              patch                                    2                   01570eefbef7b       ingress-nginx-admission-patch-4j6nj         ingress-nginx
	6308511f21c78       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   1c24bf0a4a833       csi-hostpathplugin-9rxph                    kube-system
	f27ab360ec078       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   1c24bf0a4a833       csi-hostpathplugin-9rxph                    kube-system
	eb2875a12fdc4       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            3 minutes ago            Running             gadget                                   0                   e6b83ed4311c6       gadget-d7mfz                                gadget
	8b2ec503607da       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   3 minutes ago            Exited              create                                   0                   28773f98ddd3a       ingress-nginx-admission-create-94rkz        ingress-nginx
	c1d7577e892ad       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   1c24bf0a4a833       csi-hostpathplugin-9rxph                    kube-system
	a3686c57573f9       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   09437cdc7b959       nvidia-device-plugin-daemonset-nh42v        kube-system
	a1310f21f82f3       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   0cd4c987e9adc       yakd-dashboard-5ff678cb9-tvcmp              yakd-dashboard
	0603dc6c6335f       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   ff3ea635e405f       snapshot-controller-7d9fbc56b8-xjs28        kube-system
	d0048c30bd262       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago            Running             metrics-server                           0                   7bab9d4fefec0       metrics-server-85b7d694d7-w9zzf             kube-system
	0881184118c48       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              4 minutes ago            Running             registry-proxy                           0                   380800881ce46       registry-proxy-ntzvs                        kube-system
	4f21a033f7625       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              4 minutes ago            Running             csi-resizer                              0                   a808be4c410fc       csi-hostpath-resizer-0                      kube-system
	5d0f635d3192a       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             4 minutes ago            Running             csi-attacher                             0                   8dfccd45f6135       csi-hostpath-attacher-0                     kube-system
	8702acd353295       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               4 minutes ago            Running             cloud-spanner-emulator                   0                   f9c98ac165c91       cloud-spanner-emulator-86bd5cbb97-jw2x4     default
	058fd3f4c2519       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   8034af4362de2       snapshot-controller-7d9fbc56b8-jjjfk        kube-system
	4a1acf727ae09       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             4 minutes ago            Running             local-path-provisioner                   0                   1b7e3092b591f       local-path-provisioner-648f6765c9-bsvzp     local-path-storage
	f4379003f8bbb       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               4 minutes ago            Running             minikube-ingress-dns                     0                   c48abf0f71163       kube-ingress-dns-minikube                   kube-system
	8208bb01eece1       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           4 minutes ago            Running             registry                                 0                   dbd4bc3491894       registry-6b586f9694-hgg7l                   kube-system
	3c3aa06bb4ba0       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   6fd4e04dbf2e2       coredns-66bc5c9577-jfpff                    kube-system
	b7a004a1dd4c8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   353b833ce8055       storage-provisioner                         kube-system
	07263ae55437d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             5 minutes ago            Running             kube-proxy                               0                   e67163b2b4436       kube-proxy-8p9ks                            kube-system
	5931a7ff4389c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             5 minutes ago            Running             kindnet-cni                              0                   fc0b70acda8dd       kindnet-g288l                               kube-system
	fae02c07e9b59       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago            Running             etcd                                     0                   eb9c52a781516       etcd-addons-377223                          kube-system
	8a52242ff83bb       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago            Running             kube-scheduler                           0                   57a10dbdcecf8       kube-scheduler-addons-377223                kube-system
	2567a3a7bafb7       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago            Running             kube-apiserver                           0                   9f2906ce1eb6b       kube-apiserver-addons-377223                kube-system
	8b0193372487b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago            Running             kube-controller-manager                  0                   aeb81ee14f38a       kube-controller-manager-addons-377223       kube-system
	
	
	==> coredns [3c3aa06bb4ba09d56fe9add836fcacd57122f3975b1924a516b3f65b7dd51481] <==
	[INFO] 10.244.0.6:59433 - 39859 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.00221943s
	[INFO] 10.244.0.6:59433 - 46731 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000187991s
	[INFO] 10.244.0.6:59433 - 56515 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000210439s
	[INFO] 10.244.0.6:42079 - 43892 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000140435s
	[INFO] 10.244.0.6:42079 - 43646 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000175675s
	[INFO] 10.244.0.6:41186 - 41103 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000119446s
	[INFO] 10.244.0.6:41186 - 40923 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000151077s
	[INFO] 10.244.0.6:57431 - 4353 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000134363s
	[INFO] 10.244.0.6:57431 - 4165 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000078349s
	[INFO] 10.244.0.6:34764 - 35060 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00124231s
	[INFO] 10.244.0.6:34764 - 34623 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00123376s
	[INFO] 10.244.0.6:39071 - 62561 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000138851s
	[INFO] 10.244.0.6:39071 - 62422 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000073573s
	[INFO] 10.244.0.21:37428 - 33830 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000142626s
	[INFO] 10.244.0.21:45720 - 55985 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000093174s
	[INFO] 10.244.0.21:57279 - 39455 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000079603s
	[INFO] 10.244.0.21:44995 - 669 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000076765s
	[INFO] 10.244.0.21:40690 - 16846 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000085052s
	[INFO] 10.244.0.21:44949 - 9077 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000075862s
	[INFO] 10.244.0.21:56203 - 10615 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002281927s
	[INFO] 10.244.0.21:45961 - 54463 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001896625s
	[INFO] 10.244.0.21:58462 - 13847 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.0005637s
	[INFO] 10.244.0.21:54563 - 55058 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001455948s
	[INFO] 10.244.0.23:59358 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000209774s
	[INFO] 10.244.0.23:34632 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00017666s
	
	
	==> describe nodes <==
	Name:               addons-377223
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-377223
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=addons-377223
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T08_30_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-377223
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-377223"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 08:30:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-377223
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 08:35:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 08:35:34 +0000   Sat, 01 Nov 2025 08:30:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 08:35:34 +0000   Sat, 01 Nov 2025 08:30:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 08:35:34 +0000   Sat, 01 Nov 2025 08:30:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 08:35:34 +0000   Sat, 01 Nov 2025 08:31:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-377223
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                0e5ca097-7497-4b5a-acf6-0c7438d075b8
	  Boot ID:                    eebecd53-57fd-46e5-aa39-103fca906436
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  default                     cloud-spanner-emulator-86bd5cbb97-jw2x4      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  default                     hello-world-app-5d498dc89-9pqmz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  gadget                      gadget-d7mfz                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  gcp-auth                    gcp-auth-78565c9fb4-sf5ck                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-rjv49    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m6s
	  kube-system                 coredns-66bc5c9577-jfpff                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m12s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 csi-hostpathplugin-9rxph                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 etcd-addons-377223                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m17s
	  kube-system                 kindnet-g288l                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m12s
	  kube-system                 kube-apiserver-addons-377223                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-controller-manager-addons-377223        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-proxy-8p9ks                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-scheduler-addons-377223                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 metrics-server-85b7d694d7-w9zzf              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m7s
	  kube-system                 nvidia-device-plugin-daemonset-nh42v         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 registry-6b586f9694-hgg7l                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 registry-creds-764b6fb674-jr4nd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 registry-proxy-ntzvs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 snapshot-controller-7d9fbc56b8-jjjfk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 snapshot-controller-7d9fbc56b8-xjs28         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  local-path-storage          local-path-provisioner-648f6765c9-bsvzp      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-tvcmp               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 5m10s  kube-proxy       
	  Normal   Starting                 5m17s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m17s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m17s  kubelet          Node addons-377223 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m17s  kubelet          Node addons-377223 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m17s  kubelet          Node addons-377223 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m13s  node-controller  Node addons-377223 event: Registered Node addons-377223 in Controller
	  Normal   NodeReady                4m30s  kubelet          Node addons-377223 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov 1 08:04] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:06] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:08] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:09] overlayfs: idmapped layers are currently not supported
	[ +41.926823] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:10] overlayfs: idmapped layers are currently not supported
	[ +39.688208] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:11] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:13] overlayfs: idmapped layers are currently not supported
	[ +17.643407] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:15] overlayfs: idmapped layers are currently not supported
	[ +15.590074] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:16] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:17] overlayfs: idmapped layers are currently not supported
	[ +25.755276] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:18] overlayfs: idmapped layers are currently not supported
	[  +9.757193] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:21] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:22] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:23] overlayfs: idmapped layers are currently not supported
	[  +4.855106] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:28] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 1 08:30] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [fae02c07e9b59780efff42cf36c0cce0b725f4a0d809231656f5017f195aebe7] <==
	{"level":"warn","ts":"2025-11-01T08:30:24.369770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:24.387794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:24.436684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:24.442384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:24.458756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:24.474573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:24.505297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:24.509285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:24.531422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:24.544859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:24.564625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:24.576560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:24.597307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:24.608507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:24.629642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:24.658253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:24.672613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:24.697565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:24.767469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:40.744875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:40.760941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:31:02.701978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:31:02.722923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:31:02.742759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:31:02.760376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58240","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [13678f17060dfe64038fa4572b67317d5cf2008f763b3e94c774fa0d7c88d0b2] <==
	2025/11/01 08:32:22 GCP Auth Webhook started!
	2025/11/01 08:32:35 Ready to marshal response ...
	2025/11/01 08:32:35 Ready to write response ...
	2025/11/01 08:32:35 Ready to marshal response ...
	2025/11/01 08:32:35 Ready to write response ...
	2025/11/01 08:32:35 Ready to marshal response ...
	2025/11/01 08:32:35 Ready to write response ...
	2025/11/01 08:32:55 Ready to marshal response ...
	2025/11/01 08:32:55 Ready to write response ...
	2025/11/01 08:32:57 Ready to marshal response ...
	2025/11/01 08:32:57 Ready to write response ...
	2025/11/01 08:32:57 Ready to marshal response ...
	2025/11/01 08:32:57 Ready to write response ...
	2025/11/01 08:33:06 Ready to marshal response ...
	2025/11/01 08:33:06 Ready to write response ...
	2025/11/01 08:33:08 Ready to marshal response ...
	2025/11/01 08:33:08 Ready to write response ...
	2025/11/01 08:33:22 Ready to marshal response ...
	2025/11/01 08:33:22 Ready to write response ...
	2025/11/01 08:33:33 Ready to marshal response ...
	2025/11/01 08:33:33 Ready to write response ...
	2025/11/01 08:35:43 Ready to marshal response ...
	2025/11/01 08:35:43 Ready to write response ...
	
	
	==> kernel <==
	 08:35:45 up 17:18,  0 user,  load average: 0.94, 1.35, 2.11
	Linux addons-377223 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5931a7ff4389c4f1514bfe1a6d1b0c5c1f689a7388238437090ed28390f210ea] <==
	I1101 08:33:44.756742       1 main.go:301] handling current node
	I1101 08:33:54.754752       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:33:54.754789       1 main.go:301] handling current node
	I1101 08:34:04.754169       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:34:04.754202       1 main.go:301] handling current node
	I1101 08:34:14.757130       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:34:14.757164       1 main.go:301] handling current node
	I1101 08:34:24.761171       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:34:24.761203       1 main.go:301] handling current node
	I1101 08:34:34.760187       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:34:34.760294       1 main.go:301] handling current node
	I1101 08:34:44.754880       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:34:44.754912       1 main.go:301] handling current node
	I1101 08:34:54.755137       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:34:54.755166       1 main.go:301] handling current node
	I1101 08:35:04.762233       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:35:04.762268       1 main.go:301] handling current node
	I1101 08:35:14.754191       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:35:14.754221       1 main.go:301] handling current node
	I1101 08:35:24.762040       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:35:24.762140       1 main.go:301] handling current node
	I1101 08:35:34.754493       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:35:34.755494       1 main.go:301] handling current node
	I1101 08:35:44.754359       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:35:44.754395       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2567a3a7bafb70b92331208292b9e993dda24d204dd0e1335895f63c557be7b0] <==
	E1101 08:31:15.264307       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.43.129:443: connect: connection refused" logger="UnhandledError"
	W1101 08:31:38.794656       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 08:31:38.794692       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1101 08:31:38.794705       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1101 08:31:38.795871       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 08:31:38.795952       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1101 08:31:38.795962       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1101 08:31:52.502840       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 08:31:52.502913       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1101 08:31:52.503091       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.124.221:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.124.221:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.124.221:443: connect: connection refused" logger="UnhandledError"
	E1101 08:31:52.504475       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.124.221:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.124.221:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.124.221:443: connect: connection refused" logger="UnhandledError"
	I1101 08:31:52.582159       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1101 08:32:45.236744       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:52080: use of closed network connection
	E1101 08:32:45.415580       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:52092: use of closed network connection
	I1101 08:33:20.640759       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1101 08:33:22.646978       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1101 08:33:22.951679       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.102.6"}
	E1101 08:33:41.337378       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1101 08:35:43.866593       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.172.126"}
	
	
	==> kube-controller-manager [8b0193372487bea326225079bf14bbd934e98d53cba7eaf50fc1bc3f324dcf89] <==
	I1101 08:30:32.725456       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-377223"
	I1101 08:30:32.725567       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 08:30:32.725618       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 08:30:32.725943       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 08:30:32.726222       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 08:30:32.726409       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 08:30:32.726639       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 08:30:32.727514       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 08:30:32.727715       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 08:30:32.728749       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 08:30:32.731033       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 08:30:32.731130       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 08:30:32.733509       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 08:30:32.744486       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	E1101 08:30:38.000977       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1101 08:31:02.695063       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 08:31:02.695219       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1101 08:31:02.695282       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1101 08:31:02.713645       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1101 08:31:02.718340       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1101 08:31:02.796281       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 08:31:02.819932       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 08:31:17.732363       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1101 08:31:32.801509       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 08:31:32.829206       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [07263ae55437dd8f877371c44f48f64a9062ae7d3979897f96b212a18ebf56d0] <==
	I1101 08:30:34.621317       1 server_linux.go:53] "Using iptables proxy"
	I1101 08:30:34.709737       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 08:30:34.810560       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 08:30:34.810590       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1101 08:30:34.810664       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 08:30:34.868286       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 08:30:34.868340       1 server_linux.go:132] "Using iptables Proxier"
	I1101 08:30:34.875282       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 08:30:34.875581       1 server.go:527] "Version info" version="v1.34.1"
	I1101 08:30:34.875595       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 08:30:34.877463       1 config.go:200] "Starting service config controller"
	I1101 08:30:34.877473       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 08:30:34.877505       1 config.go:106] "Starting endpoint slice config controller"
	I1101 08:30:34.877509       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 08:30:34.877519       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 08:30:34.877523       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 08:30:34.878115       1 config.go:309] "Starting node config controller"
	I1101 08:30:34.878122       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 08:30:34.878138       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 08:30:34.983596       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 08:30:34.983664       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 08:30:34.983919       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [8a52242ff83bb2c360c37d00a820f361e325851ade8acc4cc79d3753a40747c2] <==
	I1101 08:30:26.818023       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 08:30:26.820046       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 08:30:26.820085       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 08:30:26.820847       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 08:30:26.820924       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1101 08:30:26.826268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 08:30:26.826443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1101 08:30:26.829285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 08:30:26.829423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 08:30:26.829739       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 08:30:26.830267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 08:30:26.834194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 08:30:26.834359       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 08:30:26.834425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 08:30:26.834532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 08:30:26.834589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 08:30:26.834621       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 08:30:26.834656       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 08:30:26.834703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 08:30:26.834731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 08:30:26.834765       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 08:30:26.834866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 08:30:26.834899       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 08:30:26.834949       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1101 08:30:27.920518       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 08:34:28 addons-377223 kubelet[1283]: E1101 08:34:28.226410    1283 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/058ad117b9111cd836bdcf2d8045f0417f2d9705374fedf988d0004278a54138/diff" to get inode usage: stat /var/lib/containers/storage/overlay/058ad117b9111cd836bdcf2d8045f0417f2d9705374fedf988d0004278a54138/diff: no such file or directory, extraDiskErr: <nil>
	Nov 01 08:34:28 addons-377223 kubelet[1283]: E1101 08:34:28.226787    1283 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a9142d0e92b5edf27285d2314b17ce66d497c0d343c1418323f5f3d51e9e8f6a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a9142d0e92b5edf27285d2314b17ce66d497c0d343c1418323f5f3d51e9e8f6a/diff: no such file or directory, extraDiskErr: <nil>
	Nov 01 08:34:38 addons-377223 kubelet[1283]: I1101 08:34:38.071126    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-nh42v" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 08:35:18 addons-377223 kubelet[1283]: I1101 08:35:18.070622    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-hgg7l" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 08:35:25 addons-377223 kubelet[1283]: I1101 08:35:25.369351    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-jr4nd" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 08:35:25 addons-377223 kubelet[1283]: W1101 08:35:25.395242    1283 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6884fdaa9d12b8ac05ab8c27110a73f94e382dc819395576a961daa9562f8a7c/crio-44ee2132a592cb6be1eeee80721f89bdfe49619c755f337a6cf5847049640144 WatchSource:0}: Error finding container 44ee2132a592cb6be1eeee80721f89bdfe49619c755f337a6cf5847049640144: Status 404 returned error can't find the container with id 44ee2132a592cb6be1eeee80721f89bdfe49619c755f337a6cf5847049640144
	Nov 01 08:35:27 addons-377223 kubelet[1283]: I1101 08:35:27.398401    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-jr4nd" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 08:35:27 addons-377223 kubelet[1283]: I1101 08:35:27.398462    1283 scope.go:117] "RemoveContainer" containerID="83e7222d401382d728222b4f5c5b6f46ddccc61677d8b28ee9743e1c2b860880"
	Nov 01 08:35:28 addons-377223 kubelet[1283]: I1101 08:35:28.070263    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-ntzvs" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 08:35:28 addons-377223 kubelet[1283]: E1101 08:35:28.229089    1283 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/52387c58ba1df81ebfc4e459275b31881bca50d71f59985b1f1815e132074b44/diff" to get inode usage: stat /var/lib/containers/storage/overlay/52387c58ba1df81ebfc4e459275b31881bca50d71f59985b1f1815e132074b44/diff: no such file or directory, extraDiskErr: <nil>
	Nov 01 08:35:28 addons-377223 kubelet[1283]: I1101 08:35:28.405293    1283 scope.go:117] "RemoveContainer" containerID="83e7222d401382d728222b4f5c5b6f46ddccc61677d8b28ee9743e1c2b860880"
	Nov 01 08:35:28 addons-377223 kubelet[1283]: I1101 08:35:28.405699    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-jr4nd" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 08:35:28 addons-377223 kubelet[1283]: I1101 08:35:28.405749    1283 scope.go:117] "RemoveContainer" containerID="83c14e434d46d22c0ee0463fe7db169916dbf246f3a8c8cd793bc183980ec861"
	Nov 01 08:35:28 addons-377223 kubelet[1283]: E1101 08:35:28.405920    1283 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-jr4nd_kube-system(2162454c-4ead-4a3a-aeb4-e07bbd81c04c)\"" pod="kube-system/registry-creds-764b6fb674-jr4nd" podUID="2162454c-4ead-4a3a-aeb4-e07bbd81c04c"
	Nov 01 08:35:29 addons-377223 kubelet[1283]: I1101 08:35:29.410263    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-jr4nd" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 08:35:29 addons-377223 kubelet[1283]: I1101 08:35:29.410319    1283 scope.go:117] "RemoveContainer" containerID="83c14e434d46d22c0ee0463fe7db169916dbf246f3a8c8cd793bc183980ec861"
	Nov 01 08:35:29 addons-377223 kubelet[1283]: E1101 08:35:29.410458    1283 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-jr4nd_kube-system(2162454c-4ead-4a3a-aeb4-e07bbd81c04c)\"" pod="kube-system/registry-creds-764b6fb674-jr4nd" podUID="2162454c-4ead-4a3a-aeb4-e07bbd81c04c"
	Nov 01 08:35:43 addons-377223 kubelet[1283]: I1101 08:35:43.068373    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-jr4nd" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 08:35:43 addons-377223 kubelet[1283]: I1101 08:35:43.068452    1283 scope.go:117] "RemoveContainer" containerID="83c14e434d46d22c0ee0463fe7db169916dbf246f3a8c8cd793bc183980ec861"
	Nov 01 08:35:43 addons-377223 kubelet[1283]: I1101 08:35:43.470686    1283 scope.go:117] "RemoveContainer" containerID="83c14e434d46d22c0ee0463fe7db169916dbf246f3a8c8cd793bc183980ec861"
	Nov 01 08:35:43 addons-377223 kubelet[1283]: I1101 08:35:43.470938    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-jr4nd" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 08:35:43 addons-377223 kubelet[1283]: I1101 08:35:43.471113    1283 scope.go:117] "RemoveContainer" containerID="22b63ec361fd242eeed5e1dc11d28f61e5ae70323942c11627841eca735ba0e2"
	Nov 01 08:35:43 addons-377223 kubelet[1283]: E1101 08:35:43.471303    1283 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-jr4nd_kube-system(2162454c-4ead-4a3a-aeb4-e07bbd81c04c)\"" pod="kube-system/registry-creds-764b6fb674-jr4nd" podUID="2162454c-4ead-4a3a-aeb4-e07bbd81c04c"
	Nov 01 08:35:43 addons-377223 kubelet[1283]: I1101 08:35:43.857854    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m8zh\" (UniqueName: \"kubernetes.io/projected/e84ff615-dd79-461c-8517-5c84023e4a28-kube-api-access-5m8zh\") pod \"hello-world-app-5d498dc89-9pqmz\" (UID: \"e84ff615-dd79-461c-8517-5c84023e4a28\") " pod="default/hello-world-app-5d498dc89-9pqmz"
	Nov 01 08:35:43 addons-377223 kubelet[1283]: I1101 08:35:43.858132    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e84ff615-dd79-461c-8517-5c84023e4a28-gcp-creds\") pod \"hello-world-app-5d498dc89-9pqmz\" (UID: \"e84ff615-dd79-461c-8517-5c84023e4a28\") " pod="default/hello-world-app-5d498dc89-9pqmz"
	
	
	==> storage-provisioner [b7a004a1dd4c8a3998b83517cac0d350eff63e109d1288d34cf9bd98bd0dab69] <==
	W1101 08:35:21.344377       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:35:23.347447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:35:23.354308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:35:25.357092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:35:25.361567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:35:27.365262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:35:27.369703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:35:29.372673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:35:29.376931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:35:31.379457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:35:31.385933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:35:33.389388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:35:33.395554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:35:35.398231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:35:35.402541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:35:37.405763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:35:37.412100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:35:39.415446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:35:39.421909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:35:41.424289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:35:41.428654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:35:43.431062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:35:43.435433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:35:45.441930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:35:45.450388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-377223 -n addons-377223
helpers_test.go:269: (dbg) Run:  kubectl --context addons-377223 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-94rkz ingress-nginx-admission-patch-4j6nj
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-377223 describe pod ingress-nginx-admission-create-94rkz ingress-nginx-admission-patch-4j6nj
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-377223 describe pod ingress-nginx-admission-create-94rkz ingress-nginx-admission-patch-4j6nj: exit status 1 (99.077597ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-94rkz" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-4j6nj" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-377223 describe pod ingress-nginx-admission-create-94rkz ingress-nginx-admission-patch-4j6nj: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-377223 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377223 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (301.876505ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1101 08:35:47.103204 2326359 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:35:47.104919 2326359 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:35:47.104940 2326359 out.go:374] Setting ErrFile to fd 2...
	I1101 08:35:47.104946 2326359 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:35:47.105287 2326359 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 08:35:47.105647 2326359 mustload.go:66] Loading cluster: addons-377223
	I1101 08:35:47.106149 2326359 config.go:182] Loaded profile config "addons-377223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:35:47.106194 2326359 addons.go:607] checking whether the cluster is paused
	I1101 08:35:47.106352 2326359 config.go:182] Loaded profile config "addons-377223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:35:47.106387 2326359 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:35:47.106963 2326359 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:35:47.124619 2326359 ssh_runner.go:195] Run: systemctl --version
	I1101 08:35:47.124679 2326359 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:35:47.147616 2326359 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:35:47.258309 2326359 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:35:47.258402 2326359 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:35:47.287123 2326359 cri.go:89] found id: "22b63ec361fd242eeed5e1dc11d28f61e5ae70323942c11627841eca735ba0e2"
	I1101 08:35:47.287148 2326359 cri.go:89] found id: "414b5bc39c329ea5379cc50b2f0931075b8101b78dc870b2b9a824bebf99ba8b"
	I1101 08:35:47.287153 2326359 cri.go:89] found id: "061ec86ab4df32357843abeec4767f4f1461ddcead4c7cf9d1492c198dcb3d3b"
	I1101 08:35:47.287157 2326359 cri.go:89] found id: "1e44c8f5f77ecb7acff0164f0a819e1059e60366883e9ce5725f335e263d6a55"
	I1101 08:35:47.287161 2326359 cri.go:89] found id: "6308511f21c7827239fbd03746b7074e42f38be4e4d6351dca4c35f1097133ef"
	I1101 08:35:47.287165 2326359 cri.go:89] found id: "f27ab360ec07837ab4e111f876b9abd1e2f28c700a55782e54fb5162221ed2b4"
	I1101 08:35:47.287170 2326359 cri.go:89] found id: "c1d7577e892adbb3f436f19e3b28d82a49f1cbfed6b8836c1ed6f86c65f16401"
	I1101 08:35:47.287177 2326359 cri.go:89] found id: "a3686c57573f9a7ed9871c19d746a5719c1d304d85f02afc10c29a8034b950eb"
	I1101 08:35:47.287181 2326359 cri.go:89] found id: "0603dc6c6335f97df7e85d9a14e859a49db2974a48e29156dd5264d896b4de45"
	I1101 08:35:47.287187 2326359 cri.go:89] found id: "d0048c30bd26213dfb453fa2bbd938c97e55fab6b53fc18bf545cdf3d996629d"
	I1101 08:35:47.287190 2326359 cri.go:89] found id: "0881184118c48ea6a57033511f480150827ad00b72255518f4d483725cab9f6c"
	I1101 08:35:47.287193 2326359 cri.go:89] found id: "4f21a033f7625d849deaefcdab250333db4bcf976055c2054e5820079f2d598e"
	I1101 08:35:47.287197 2326359 cri.go:89] found id: "5d0f635d3192a9e4f37b1f74942ca9a6d8846c5343e838584565abab0973a4b6"
	I1101 08:35:47.287200 2326359 cri.go:89] found id: "058fd3f4c2519a11447a33c3880fa2b1da6db273202e78739d3bb6bc56aafea3"
	I1101 08:35:47.287203 2326359 cri.go:89] found id: "f4379003f8bbbe0705cf7426f24a33ec6aaeb1b1f4fbd166749ec7eb68e28872"
	I1101 08:35:47.287209 2326359 cri.go:89] found id: "8208bb01eece1ad45ab18a4c4a3a0d21d53697dbf385e141bee5bd9ba3f5de1c"
	I1101 08:35:47.287219 2326359 cri.go:89] found id: "3c3aa06bb4ba09d56fe9add836fcacd57122f3975b1924a516b3f65b7dd51481"
	I1101 08:35:47.287223 2326359 cri.go:89] found id: "b7a004a1dd4c8a3998b83517cac0d350eff63e109d1288d34cf9bd98bd0dab69"
	I1101 08:35:47.287227 2326359 cri.go:89] found id: "07263ae55437dd8f877371c44f48f64a9062ae7d3979897f96b212a18ebf56d0"
	I1101 08:35:47.287229 2326359 cri.go:89] found id: "5931a7ff4389c4f1514bfe1a6d1b0c5c1f689a7388238437090ed28390f210ea"
	I1101 08:35:47.287234 2326359 cri.go:89] found id: "fae02c07e9b59780efff42cf36c0cce0b725f4a0d809231656f5017f195aebe7"
	I1101 08:35:47.287240 2326359 cri.go:89] found id: "8a52242ff83bb2c360c37d00a820f361e325851ade8acc4cc79d3753a40747c2"
	I1101 08:35:47.287243 2326359 cri.go:89] found id: "2567a3a7bafb70b92331208292b9e993dda24d204dd0e1335895f63c557be7b0"
	I1101 08:35:47.287247 2326359 cri.go:89] found id: "8b0193372487bea326225079bf14bbd934e98d53cba7eaf50fc1bc3f324dcf89"
	I1101 08:35:47.287250 2326359 cri.go:89] found id: ""
	I1101 08:35:47.287302 2326359 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:35:47.302230 2326359 out.go:203] 
	W1101 08:35:47.305098 2326359 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:35:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:35:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:35:47.305125 2326359 out.go:285] * 
	* 
	W1101 08:35:47.318711 2326359 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:35:47.321872 2326359 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-377223 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-377223 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377223 addons disable ingress --alsologtostderr -v=1: exit status 11 (277.531333ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1101 08:35:47.381110 2326468 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:35:47.383960 2326468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:35:47.384012 2326468 out.go:374] Setting ErrFile to fd 2...
	I1101 08:35:47.384033 2326468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:35:47.384539 2326468 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 08:35:47.384908 2326468 mustload.go:66] Loading cluster: addons-377223
	I1101 08:35:47.385342 2326468 config.go:182] Loaded profile config "addons-377223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:35:47.385393 2326468 addons.go:607] checking whether the cluster is paused
	I1101 08:35:47.385548 2326468 config.go:182] Loaded profile config "addons-377223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:35:47.385590 2326468 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:35:47.386093 2326468 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:35:47.402957 2326468 ssh_runner.go:195] Run: systemctl --version
	I1101 08:35:47.403017 2326468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:35:47.420405 2326468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:35:47.526472 2326468 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:35:47.526562 2326468 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:35:47.559508 2326468 cri.go:89] found id: "22b63ec361fd242eeed5e1dc11d28f61e5ae70323942c11627841eca735ba0e2"
	I1101 08:35:47.559527 2326468 cri.go:89] found id: "414b5bc39c329ea5379cc50b2f0931075b8101b78dc870b2b9a824bebf99ba8b"
	I1101 08:35:47.559532 2326468 cri.go:89] found id: "061ec86ab4df32357843abeec4767f4f1461ddcead4c7cf9d1492c198dcb3d3b"
	I1101 08:35:47.559536 2326468 cri.go:89] found id: "1e44c8f5f77ecb7acff0164f0a819e1059e60366883e9ce5725f335e263d6a55"
	I1101 08:35:47.559540 2326468 cri.go:89] found id: "6308511f21c7827239fbd03746b7074e42f38be4e4d6351dca4c35f1097133ef"
	I1101 08:35:47.559543 2326468 cri.go:89] found id: "f27ab360ec07837ab4e111f876b9abd1e2f28c700a55782e54fb5162221ed2b4"
	I1101 08:35:47.559546 2326468 cri.go:89] found id: "c1d7577e892adbb3f436f19e3b28d82a49f1cbfed6b8836c1ed6f86c65f16401"
	I1101 08:35:47.559549 2326468 cri.go:89] found id: "a3686c57573f9a7ed9871c19d746a5719c1d304d85f02afc10c29a8034b950eb"
	I1101 08:35:47.559552 2326468 cri.go:89] found id: "0603dc6c6335f97df7e85d9a14e859a49db2974a48e29156dd5264d896b4de45"
	I1101 08:35:47.559558 2326468 cri.go:89] found id: "d0048c30bd26213dfb453fa2bbd938c97e55fab6b53fc18bf545cdf3d996629d"
	I1101 08:35:47.559561 2326468 cri.go:89] found id: "0881184118c48ea6a57033511f480150827ad00b72255518f4d483725cab9f6c"
	I1101 08:35:47.559564 2326468 cri.go:89] found id: "4f21a033f7625d849deaefcdab250333db4bcf976055c2054e5820079f2d598e"
	I1101 08:35:47.559567 2326468 cri.go:89] found id: "5d0f635d3192a9e4f37b1f74942ca9a6d8846c5343e838584565abab0973a4b6"
	I1101 08:35:47.559571 2326468 cri.go:89] found id: "058fd3f4c2519a11447a33c3880fa2b1da6db273202e78739d3bb6bc56aafea3"
	I1101 08:35:47.559574 2326468 cri.go:89] found id: "f4379003f8bbbe0705cf7426f24a33ec6aaeb1b1f4fbd166749ec7eb68e28872"
	I1101 08:35:47.559581 2326468 cri.go:89] found id: "8208bb01eece1ad45ab18a4c4a3a0d21d53697dbf385e141bee5bd9ba3f5de1c"
	I1101 08:35:47.559588 2326468 cri.go:89] found id: "3c3aa06bb4ba09d56fe9add836fcacd57122f3975b1924a516b3f65b7dd51481"
	I1101 08:35:47.559592 2326468 cri.go:89] found id: "b7a004a1dd4c8a3998b83517cac0d350eff63e109d1288d34cf9bd98bd0dab69"
	I1101 08:35:47.559595 2326468 cri.go:89] found id: "07263ae55437dd8f877371c44f48f64a9062ae7d3979897f96b212a18ebf56d0"
	I1101 08:35:47.559598 2326468 cri.go:89] found id: "5931a7ff4389c4f1514bfe1a6d1b0c5c1f689a7388238437090ed28390f210ea"
	I1101 08:35:47.559601 2326468 cri.go:89] found id: "fae02c07e9b59780efff42cf36c0cce0b725f4a0d809231656f5017f195aebe7"
	I1101 08:35:47.559605 2326468 cri.go:89] found id: "8a52242ff83bb2c360c37d00a820f361e325851ade8acc4cc79d3753a40747c2"
	I1101 08:35:47.559608 2326468 cri.go:89] found id: "2567a3a7bafb70b92331208292b9e993dda24d204dd0e1335895f63c557be7b0"
	I1101 08:35:47.559612 2326468 cri.go:89] found id: "8b0193372487bea326225079bf14bbd934e98d53cba7eaf50fc1bc3f324dcf89"
	I1101 08:35:47.559626 2326468 cri.go:89] found id: ""
	I1101 08:35:47.559679 2326468 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:35:47.580763 2326468 out.go:203] 
	W1101 08:35:47.583600 2326468 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:35:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:35:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:35:47.583620 2326468 out.go:285] * 
	* 
	W1101 08:35:47.595142 2326468 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:35:47.598154 2326468 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-377223 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (145.27s)
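Note on the failure mode: every `addons disable` call in this report exits with MK_ADDON_DISABLE_PAUSED for the same reason — before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers and then running `sudo runc list -f json`, and that last step fails on this crio node because /run/runc does not exist. A minimal sketch for reproducing the check by hand over SSH (both commands are the ones shown in the stderr above; treating runc as the node's configured OCI runtime is an assumption):

	out/minikube-linux-arm64 -p addons-377223 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	out/minikube-linux-arm64 -p addons-377223 ssh -- sudo runc list -f json   # reproduces: open /run/runc: no such file or directory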

TestAddons/parallel/InspektorGadget (5.27s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-d7mfz" [5fb7e2cd-0205-4741-8849-4cc2bac187e4] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005999672s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-377223 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377223 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (267.552952ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1101 08:33:22.124045 2324458 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:33:22.125446 2324458 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:33:22.125461 2324458 out.go:374] Setting ErrFile to fd 2...
	I1101 08:33:22.125466 2324458 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:33:22.125721 2324458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 08:33:22.126003 2324458 mustload.go:66] Loading cluster: addons-377223
	I1101 08:33:22.126356 2324458 config.go:182] Loaded profile config "addons-377223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:33:22.126382 2324458 addons.go:607] checking whether the cluster is paused
	I1101 08:33:22.126491 2324458 config.go:182] Loaded profile config "addons-377223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:33:22.126514 2324458 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:33:22.127006 2324458 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:33:22.143930 2324458 ssh_runner.go:195] Run: systemctl --version
	I1101 08:33:22.143985 2324458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:33:22.162464 2324458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:33:22.266304 2324458 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:33:22.266383 2324458 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:33:22.296676 2324458 cri.go:89] found id: "414b5bc39c329ea5379cc50b2f0931075b8101b78dc870b2b9a824bebf99ba8b"
	I1101 08:33:22.296694 2324458 cri.go:89] found id: "061ec86ab4df32357843abeec4767f4f1461ddcead4c7cf9d1492c198dcb3d3b"
	I1101 08:33:22.296698 2324458 cri.go:89] found id: "1e44c8f5f77ecb7acff0164f0a819e1059e60366883e9ce5725f335e263d6a55"
	I1101 08:33:22.296702 2324458 cri.go:89] found id: "6308511f21c7827239fbd03746b7074e42f38be4e4d6351dca4c35f1097133ef"
	I1101 08:33:22.296705 2324458 cri.go:89] found id: "f27ab360ec07837ab4e111f876b9abd1e2f28c700a55782e54fb5162221ed2b4"
	I1101 08:33:22.296708 2324458 cri.go:89] found id: "c1d7577e892adbb3f436f19e3b28d82a49f1cbfed6b8836c1ed6f86c65f16401"
	I1101 08:33:22.296711 2324458 cri.go:89] found id: "a3686c57573f9a7ed9871c19d746a5719c1d304d85f02afc10c29a8034b950eb"
	I1101 08:33:22.296714 2324458 cri.go:89] found id: "0603dc6c6335f97df7e85d9a14e859a49db2974a48e29156dd5264d896b4de45"
	I1101 08:33:22.296717 2324458 cri.go:89] found id: "d0048c30bd26213dfb453fa2bbd938c97e55fab6b53fc18bf545cdf3d996629d"
	I1101 08:33:22.296723 2324458 cri.go:89] found id: "0881184118c48ea6a57033511f480150827ad00b72255518f4d483725cab9f6c"
	I1101 08:33:22.296726 2324458 cri.go:89] found id: "4f21a033f7625d849deaefcdab250333db4bcf976055c2054e5820079f2d598e"
	I1101 08:33:22.296730 2324458 cri.go:89] found id: "5d0f635d3192a9e4f37b1f74942ca9a6d8846c5343e838584565abab0973a4b6"
	I1101 08:33:22.296733 2324458 cri.go:89] found id: "058fd3f4c2519a11447a33c3880fa2b1da6db273202e78739d3bb6bc56aafea3"
	I1101 08:33:22.296736 2324458 cri.go:89] found id: "f4379003f8bbbe0705cf7426f24a33ec6aaeb1b1f4fbd166749ec7eb68e28872"
	I1101 08:33:22.296739 2324458 cri.go:89] found id: "8208bb01eece1ad45ab18a4c4a3a0d21d53697dbf385e141bee5bd9ba3f5de1c"
	I1101 08:33:22.296747 2324458 cri.go:89] found id: "3c3aa06bb4ba09d56fe9add836fcacd57122f3975b1924a516b3f65b7dd51481"
	I1101 08:33:22.296750 2324458 cri.go:89] found id: "b7a004a1dd4c8a3998b83517cac0d350eff63e109d1288d34cf9bd98bd0dab69"
	I1101 08:33:22.296754 2324458 cri.go:89] found id: "07263ae55437dd8f877371c44f48f64a9062ae7d3979897f96b212a18ebf56d0"
	I1101 08:33:22.296757 2324458 cri.go:89] found id: "5931a7ff4389c4f1514bfe1a6d1b0c5c1f689a7388238437090ed28390f210ea"
	I1101 08:33:22.296761 2324458 cri.go:89] found id: "fae02c07e9b59780efff42cf36c0cce0b725f4a0d809231656f5017f195aebe7"
	I1101 08:33:22.296765 2324458 cri.go:89] found id: "8a52242ff83bb2c360c37d00a820f361e325851ade8acc4cc79d3753a40747c2"
	I1101 08:33:22.296768 2324458 cri.go:89] found id: "2567a3a7bafb70b92331208292b9e993dda24d204dd0e1335895f63c557be7b0"
	I1101 08:33:22.296771 2324458 cri.go:89] found id: "8b0193372487bea326225079bf14bbd934e98d53cba7eaf50fc1bc3f324dcf89"
	I1101 08:33:22.296774 2324458 cri.go:89] found id: ""
	I1101 08:33:22.296821 2324458 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:33:22.311620 2324458 out.go:203] 
	W1101 08:33:22.314682 2324458 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:33:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:33:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:33:22.314706 2324458 out.go:285] * 
	* 
	W1101 08:33:22.326247 2324458 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:33:22.329428 2324458 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-377223 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.27s)

TestAddons/parallel/MetricsServer (6.35s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.302138ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-w9zzf" [648c1d34-d194-4696-9225-2f20f84b51df] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.002904058s
addons_test.go:463: (dbg) Run:  kubectl --context addons-377223 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-377223 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377223 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (269.574386ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1101 08:33:16.843546 2324357 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:33:16.844978 2324357 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:33:16.845023 2324357 out.go:374] Setting ErrFile to fd 2...
	I1101 08:33:16.845044 2324357 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:33:16.845317 2324357 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 08:33:16.845648 2324357 mustload.go:66] Loading cluster: addons-377223
	I1101 08:33:16.846048 2324357 config.go:182] Loaded profile config "addons-377223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:33:16.846101 2324357 addons.go:607] checking whether the cluster is paused
	I1101 08:33:16.846227 2324357 config.go:182] Loaded profile config "addons-377223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:33:16.846272 2324357 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:33:16.846736 2324357 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:33:16.863398 2324357 ssh_runner.go:195] Run: systemctl --version
	I1101 08:33:16.863446 2324357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:33:16.883055 2324357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:33:16.990401 2324357 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:33:16.990497 2324357 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:33:17.021694 2324357 cri.go:89] found id: "414b5bc39c329ea5379cc50b2f0931075b8101b78dc870b2b9a824bebf99ba8b"
	I1101 08:33:17.021717 2324357 cri.go:89] found id: "061ec86ab4df32357843abeec4767f4f1461ddcead4c7cf9d1492c198dcb3d3b"
	I1101 08:33:17.021722 2324357 cri.go:89] found id: "1e44c8f5f77ecb7acff0164f0a819e1059e60366883e9ce5725f335e263d6a55"
	I1101 08:33:17.021726 2324357 cri.go:89] found id: "6308511f21c7827239fbd03746b7074e42f38be4e4d6351dca4c35f1097133ef"
	I1101 08:33:17.021729 2324357 cri.go:89] found id: "f27ab360ec07837ab4e111f876b9abd1e2f28c700a55782e54fb5162221ed2b4"
	I1101 08:33:17.021733 2324357 cri.go:89] found id: "c1d7577e892adbb3f436f19e3b28d82a49f1cbfed6b8836c1ed6f86c65f16401"
	I1101 08:33:17.021736 2324357 cri.go:89] found id: "a3686c57573f9a7ed9871c19d746a5719c1d304d85f02afc10c29a8034b950eb"
	I1101 08:33:17.021739 2324357 cri.go:89] found id: "0603dc6c6335f97df7e85d9a14e859a49db2974a48e29156dd5264d896b4de45"
	I1101 08:33:17.021742 2324357 cri.go:89] found id: "d0048c30bd26213dfb453fa2bbd938c97e55fab6b53fc18bf545cdf3d996629d"
	I1101 08:33:17.021748 2324357 cri.go:89] found id: "0881184118c48ea6a57033511f480150827ad00b72255518f4d483725cab9f6c"
	I1101 08:33:17.021752 2324357 cri.go:89] found id: "4f21a033f7625d849deaefcdab250333db4bcf976055c2054e5820079f2d598e"
	I1101 08:33:17.021755 2324357 cri.go:89] found id: "5d0f635d3192a9e4f37b1f74942ca9a6d8846c5343e838584565abab0973a4b6"
	I1101 08:33:17.021758 2324357 cri.go:89] found id: "058fd3f4c2519a11447a33c3880fa2b1da6db273202e78739d3bb6bc56aafea3"
	I1101 08:33:17.021762 2324357 cri.go:89] found id: "f4379003f8bbbe0705cf7426f24a33ec6aaeb1b1f4fbd166749ec7eb68e28872"
	I1101 08:33:17.021764 2324357 cri.go:89] found id: "8208bb01eece1ad45ab18a4c4a3a0d21d53697dbf385e141bee5bd9ba3f5de1c"
	I1101 08:33:17.021772 2324357 cri.go:89] found id: "3c3aa06bb4ba09d56fe9add836fcacd57122f3975b1924a516b3f65b7dd51481"
	I1101 08:33:17.021779 2324357 cri.go:89] found id: "b7a004a1dd4c8a3998b83517cac0d350eff63e109d1288d34cf9bd98bd0dab69"
	I1101 08:33:17.021787 2324357 cri.go:89] found id: "07263ae55437dd8f877371c44f48f64a9062ae7d3979897f96b212a18ebf56d0"
	I1101 08:33:17.021790 2324357 cri.go:89] found id: "5931a7ff4389c4f1514bfe1a6d1b0c5c1f689a7388238437090ed28390f210ea"
	I1101 08:33:17.021793 2324357 cri.go:89] found id: "fae02c07e9b59780efff42cf36c0cce0b725f4a0d809231656f5017f195aebe7"
	I1101 08:33:17.021797 2324357 cri.go:89] found id: "8a52242ff83bb2c360c37d00a820f361e325851ade8acc4cc79d3753a40747c2"
	I1101 08:33:17.021803 2324357 cri.go:89] found id: "2567a3a7bafb70b92331208292b9e993dda24d204dd0e1335895f63c557be7b0"
	I1101 08:33:17.021807 2324357 cri.go:89] found id: "8b0193372487bea326225079bf14bbd934e98d53cba7eaf50fc1bc3f324dcf89"
	I1101 08:33:17.021810 2324357 cri.go:89] found id: ""
	I1101 08:33:17.021860 2324357 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:33:17.036636 2324357 out.go:203] 
	W1101 08:33:17.039520 2324357 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:33:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:33:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:33:17.039545 2324357 out.go:285] * 
	* 
	W1101 08:33:17.052023 2324357 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:33:17.055129 2324357 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-377223 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (6.35s)

TestAddons/parallel/CSI (35.37s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1101 08:33:06.940263 2315982 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1101 08:33:06.944199 2315982 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1101 08:33:06.944230 2315982 kapi.go:107] duration metric: took 3.976449ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.987517ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-377223 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377223 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377223 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-377223 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [5f1a8f82-db98-482b-b1fd-9b6614922fa9] Pending
helpers_test.go:352: "task-pv-pod" [5f1a8f82-db98-482b-b1fd-9b6614922fa9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [5f1a8f82-db98-482b-b1fd-9b6614922fa9] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003657393s
addons_test.go:572: (dbg) Run:  kubectl --context addons-377223 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-377223 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-377223 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-377223 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-377223 delete pod task-pv-pod: (1.246444742s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-377223 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-377223 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377223 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377223 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377223 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377223 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377223 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377223 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377223 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377223 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377223 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377223 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377223 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-377223 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [b4cceed5-f3c8-47ea-803e-6ac475301895] Pending
helpers_test.go:352: "task-pv-pod-restore" [b4cceed5-f3c8-47ea-803e-6ac475301895] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [b4cceed5-f3c8-47ea-803e-6ac475301895] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003666951s
addons_test.go:614: (dbg) Run:  kubectl --context addons-377223 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-377223 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-377223 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-377223 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377223 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (261.742504ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1101 08:33:41.766313 2325139 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:33:41.767757 2325139 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:33:41.767773 2325139 out.go:374] Setting ErrFile to fd 2...
	I1101 08:33:41.767779 2325139 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:33:41.768103 2325139 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 08:33:41.768392 2325139 mustload.go:66] Loading cluster: addons-377223
	I1101 08:33:41.768750 2325139 config.go:182] Loaded profile config "addons-377223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:33:41.768775 2325139 addons.go:607] checking whether the cluster is paused
	I1101 08:33:41.768874 2325139 config.go:182] Loaded profile config "addons-377223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:33:41.768897 2325139 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:33:41.769333 2325139 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:33:41.791720 2325139 ssh_runner.go:195] Run: systemctl --version
	I1101 08:33:41.791788 2325139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:33:41.808893 2325139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:33:41.914066 2325139 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:33:41.914154 2325139 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:33:41.946474 2325139 cri.go:89] found id: "414b5bc39c329ea5379cc50b2f0931075b8101b78dc870b2b9a824bebf99ba8b"
	I1101 08:33:41.946493 2325139 cri.go:89] found id: "061ec86ab4df32357843abeec4767f4f1461ddcead4c7cf9d1492c198dcb3d3b"
	I1101 08:33:41.946498 2325139 cri.go:89] found id: "1e44c8f5f77ecb7acff0164f0a819e1059e60366883e9ce5725f335e263d6a55"
	I1101 08:33:41.946502 2325139 cri.go:89] found id: "6308511f21c7827239fbd03746b7074e42f38be4e4d6351dca4c35f1097133ef"
	I1101 08:33:41.946505 2325139 cri.go:89] found id: "f27ab360ec07837ab4e111f876b9abd1e2f28c700a55782e54fb5162221ed2b4"
	I1101 08:33:41.946509 2325139 cri.go:89] found id: "c1d7577e892adbb3f436f19e3b28d82a49f1cbfed6b8836c1ed6f86c65f16401"
	I1101 08:33:41.946513 2325139 cri.go:89] found id: "a3686c57573f9a7ed9871c19d746a5719c1d304d85f02afc10c29a8034b950eb"
	I1101 08:33:41.946516 2325139 cri.go:89] found id: "0603dc6c6335f97df7e85d9a14e859a49db2974a48e29156dd5264d896b4de45"
	I1101 08:33:41.946519 2325139 cri.go:89] found id: "d0048c30bd26213dfb453fa2bbd938c97e55fab6b53fc18bf545cdf3d996629d"
	I1101 08:33:41.946525 2325139 cri.go:89] found id: "0881184118c48ea6a57033511f480150827ad00b72255518f4d483725cab9f6c"
	I1101 08:33:41.946528 2325139 cri.go:89] found id: "4f21a033f7625d849deaefcdab250333db4bcf976055c2054e5820079f2d598e"
	I1101 08:33:41.946532 2325139 cri.go:89] found id: "5d0f635d3192a9e4f37b1f74942ca9a6d8846c5343e838584565abab0973a4b6"
	I1101 08:33:41.946535 2325139 cri.go:89] found id: "058fd3f4c2519a11447a33c3880fa2b1da6db273202e78739d3bb6bc56aafea3"
	I1101 08:33:41.946538 2325139 cri.go:89] found id: "f4379003f8bbbe0705cf7426f24a33ec6aaeb1b1f4fbd166749ec7eb68e28872"
	I1101 08:33:41.946542 2325139 cri.go:89] found id: "8208bb01eece1ad45ab18a4c4a3a0d21d53697dbf385e141bee5bd9ba3f5de1c"
	I1101 08:33:41.946546 2325139 cri.go:89] found id: "3c3aa06bb4ba09d56fe9add836fcacd57122f3975b1924a516b3f65b7dd51481"
	I1101 08:33:41.946549 2325139 cri.go:89] found id: "b7a004a1dd4c8a3998b83517cac0d350eff63e109d1288d34cf9bd98bd0dab69"
	I1101 08:33:41.946555 2325139 cri.go:89] found id: "07263ae55437dd8f877371c44f48f64a9062ae7d3979897f96b212a18ebf56d0"
	I1101 08:33:41.946558 2325139 cri.go:89] found id: "5931a7ff4389c4f1514bfe1a6d1b0c5c1f689a7388238437090ed28390f210ea"
	I1101 08:33:41.946561 2325139 cri.go:89] found id: "fae02c07e9b59780efff42cf36c0cce0b725f4a0d809231656f5017f195aebe7"
	I1101 08:33:41.946566 2325139 cri.go:89] found id: "8a52242ff83bb2c360c37d00a820f361e325851ade8acc4cc79d3753a40747c2"
	I1101 08:33:41.946576 2325139 cri.go:89] found id: "2567a3a7bafb70b92331208292b9e993dda24d204dd0e1335895f63c557be7b0"
	I1101 08:33:41.946581 2325139 cri.go:89] found id: "8b0193372487bea326225079bf14bbd934e98d53cba7eaf50fc1bc3f324dcf89"
	I1101 08:33:41.946585 2325139 cri.go:89] found id: ""
	I1101 08:33:41.946647 2325139 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:33:41.961891 2325139 out.go:203] 
	W1101 08:33:41.964714 2325139 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:33:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:33:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:33:41.964737 2325139 out.go:285] * 
	* 
	W1101 08:33:41.976505 2325139 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:33:41.979500 2325139 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-377223 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-377223 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377223 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (325.524935ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1101 08:33:42.046727 2325182 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:33:42.048201 2325182 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:33:42.048256 2325182 out.go:374] Setting ErrFile to fd 2...
	I1101 08:33:42.048278 2325182 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:33:42.048674 2325182 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 08:33:42.049034 2325182 mustload.go:66] Loading cluster: addons-377223
	I1101 08:33:42.049492 2325182 config.go:182] Loaded profile config "addons-377223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:33:42.049547 2325182 addons.go:607] checking whether the cluster is paused
	I1101 08:33:42.049684 2325182 config.go:182] Loaded profile config "addons-377223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:33:42.049728 2325182 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:33:42.050244 2325182 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:33:42.084188 2325182 ssh_runner.go:195] Run: systemctl --version
	I1101 08:33:42.084264 2325182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:33:42.112712 2325182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:33:42.228721 2325182 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:33:42.228888 2325182 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:33:42.266947 2325182 cri.go:89] found id: "414b5bc39c329ea5379cc50b2f0931075b8101b78dc870b2b9a824bebf99ba8b"
	I1101 08:33:42.266981 2325182 cri.go:89] found id: "061ec86ab4df32357843abeec4767f4f1461ddcead4c7cf9d1492c198dcb3d3b"
	I1101 08:33:42.266987 2325182 cri.go:89] found id: "1e44c8f5f77ecb7acff0164f0a819e1059e60366883e9ce5725f335e263d6a55"
	I1101 08:33:42.266993 2325182 cri.go:89] found id: "6308511f21c7827239fbd03746b7074e42f38be4e4d6351dca4c35f1097133ef"
	I1101 08:33:42.266997 2325182 cri.go:89] found id: "f27ab360ec07837ab4e111f876b9abd1e2f28c700a55782e54fb5162221ed2b4"
	I1101 08:33:42.267004 2325182 cri.go:89] found id: "c1d7577e892adbb3f436f19e3b28d82a49f1cbfed6b8836c1ed6f86c65f16401"
	I1101 08:33:42.267007 2325182 cri.go:89] found id: "a3686c57573f9a7ed9871c19d746a5719c1d304d85f02afc10c29a8034b950eb"
	I1101 08:33:42.267011 2325182 cri.go:89] found id: "0603dc6c6335f97df7e85d9a14e859a49db2974a48e29156dd5264d896b4de45"
	I1101 08:33:42.267015 2325182 cri.go:89] found id: "d0048c30bd26213dfb453fa2bbd938c97e55fab6b53fc18bf545cdf3d996629d"
	I1101 08:33:42.267023 2325182 cri.go:89] found id: "0881184118c48ea6a57033511f480150827ad00b72255518f4d483725cab9f6c"
	I1101 08:33:42.267027 2325182 cri.go:89] found id: "4f21a033f7625d849deaefcdab250333db4bcf976055c2054e5820079f2d598e"
	I1101 08:33:42.267031 2325182 cri.go:89] found id: "5d0f635d3192a9e4f37b1f74942ca9a6d8846c5343e838584565abab0973a4b6"
	I1101 08:33:42.267035 2325182 cri.go:89] found id: "058fd3f4c2519a11447a33c3880fa2b1da6db273202e78739d3bb6bc56aafea3"
	I1101 08:33:42.267038 2325182 cri.go:89] found id: "f4379003f8bbbe0705cf7426f24a33ec6aaeb1b1f4fbd166749ec7eb68e28872"
	I1101 08:33:42.267041 2325182 cri.go:89] found id: "8208bb01eece1ad45ab18a4c4a3a0d21d53697dbf385e141bee5bd9ba3f5de1c"
	I1101 08:33:42.267048 2325182 cri.go:89] found id: "3c3aa06bb4ba09d56fe9add836fcacd57122f3975b1924a516b3f65b7dd51481"
	I1101 08:33:42.267057 2325182 cri.go:89] found id: "b7a004a1dd4c8a3998b83517cac0d350eff63e109d1288d34cf9bd98bd0dab69"
	I1101 08:33:42.267063 2325182 cri.go:89] found id: "07263ae55437dd8f877371c44f48f64a9062ae7d3979897f96b212a18ebf56d0"
	I1101 08:33:42.267067 2325182 cri.go:89] found id: "5931a7ff4389c4f1514bfe1a6d1b0c5c1f689a7388238437090ed28390f210ea"
	I1101 08:33:42.267071 2325182 cri.go:89] found id: "fae02c07e9b59780efff42cf36c0cce0b725f4a0d809231656f5017f195aebe7"
	I1101 08:33:42.267077 2325182 cri.go:89] found id: "8a52242ff83bb2c360c37d00a820f361e325851ade8acc4cc79d3753a40747c2"
	I1101 08:33:42.267081 2325182 cri.go:89] found id: "2567a3a7bafb70b92331208292b9e993dda24d204dd0e1335895f63c557be7b0"
	I1101 08:33:42.267084 2325182 cri.go:89] found id: "8b0193372487bea326225079bf14bbd934e98d53cba7eaf50fc1bc3f324dcf89"
	I1101 08:33:42.267087 2325182 cri.go:89] found id: ""
	I1101 08:33:42.267144 2325182 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:33:42.285719 2325182 out.go:203] 
	W1101 08:33:42.288648 2325182 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:33:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:33:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:33:42.288678 2325182 out.go:285] * 
	* 
	W1101 08:33:42.301218 2325182 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:33:42.304396 2325182 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-377223 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (35.37s)

TestAddons/parallel/Headlamp (4.1s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-377223 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-377223 --alsologtostderr -v=1: exit status 11 (376.078002ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1101 08:33:06.723471 2323580 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:33:06.725296 2323580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:33:06.725349 2323580 out.go:374] Setting ErrFile to fd 2...
	I1101 08:33:06.725379 2323580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:33:06.725772 2323580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 08:33:06.726232 2323580 mustload.go:66] Loading cluster: addons-377223
	I1101 08:33:06.726750 2323580 config.go:182] Loaded profile config "addons-377223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:33:06.726813 2323580 addons.go:607] checking whether the cluster is paused
	I1101 08:33:06.726986 2323580 config.go:182] Loaded profile config "addons-377223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:33:06.727044 2323580 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:33:06.727606 2323580 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:33:06.749458 2323580 ssh_runner.go:195] Run: systemctl --version
	I1101 08:33:06.749508 2323580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:33:06.771970 2323580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:33:06.889324 2323580 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:33:06.889450 2323580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:33:06.928126 2323580 cri.go:89] found id: "414b5bc39c329ea5379cc50b2f0931075b8101b78dc870b2b9a824bebf99ba8b"
	I1101 08:33:06.928148 2323580 cri.go:89] found id: "061ec86ab4df32357843abeec4767f4f1461ddcead4c7cf9d1492c198dcb3d3b"
	I1101 08:33:06.928154 2323580 cri.go:89] found id: "1e44c8f5f77ecb7acff0164f0a819e1059e60366883e9ce5725f335e263d6a55"
	I1101 08:33:06.928158 2323580 cri.go:89] found id: "6308511f21c7827239fbd03746b7074e42f38be4e4d6351dca4c35f1097133ef"
	I1101 08:33:06.928167 2323580 cri.go:89] found id: "f27ab360ec07837ab4e111f876b9abd1e2f28c700a55782e54fb5162221ed2b4"
	I1101 08:33:06.928171 2323580 cri.go:89] found id: "c1d7577e892adbb3f436f19e3b28d82a49f1cbfed6b8836c1ed6f86c65f16401"
	I1101 08:33:06.928174 2323580 cri.go:89] found id: "a3686c57573f9a7ed9871c19d746a5719c1d304d85f02afc10c29a8034b950eb"
	I1101 08:33:06.928178 2323580 cri.go:89] found id: "0603dc6c6335f97df7e85d9a14e859a49db2974a48e29156dd5264d896b4de45"
	I1101 08:33:06.928181 2323580 cri.go:89] found id: "d0048c30bd26213dfb453fa2bbd938c97e55fab6b53fc18bf545cdf3d996629d"
	I1101 08:33:06.928187 2323580 cri.go:89] found id: "0881184118c48ea6a57033511f480150827ad00b72255518f4d483725cab9f6c"
	I1101 08:33:06.928190 2323580 cri.go:89] found id: "4f21a033f7625d849deaefcdab250333db4bcf976055c2054e5820079f2d598e"
	I1101 08:33:06.928193 2323580 cri.go:89] found id: "5d0f635d3192a9e4f37b1f74942ca9a6d8846c5343e838584565abab0973a4b6"
	I1101 08:33:06.928197 2323580 cri.go:89] found id: "058fd3f4c2519a11447a33c3880fa2b1da6db273202e78739d3bb6bc56aafea3"
	I1101 08:33:06.928201 2323580 cri.go:89] found id: "f4379003f8bbbe0705cf7426f24a33ec6aaeb1b1f4fbd166749ec7eb68e28872"
	I1101 08:33:06.928204 2323580 cri.go:89] found id: "8208bb01eece1ad45ab18a4c4a3a0d21d53697dbf385e141bee5bd9ba3f5de1c"
	I1101 08:33:06.928215 2323580 cri.go:89] found id: "3c3aa06bb4ba09d56fe9add836fcacd57122f3975b1924a516b3f65b7dd51481"
	I1101 08:33:06.928218 2323580 cri.go:89] found id: "b7a004a1dd4c8a3998b83517cac0d350eff63e109d1288d34cf9bd98bd0dab69"
	I1101 08:33:06.928223 2323580 cri.go:89] found id: "07263ae55437dd8f877371c44f48f64a9062ae7d3979897f96b212a18ebf56d0"
	I1101 08:33:06.928226 2323580 cri.go:89] found id: "5931a7ff4389c4f1514bfe1a6d1b0c5c1f689a7388238437090ed28390f210ea"
	I1101 08:33:06.928229 2323580 cri.go:89] found id: "fae02c07e9b59780efff42cf36c0cce0b725f4a0d809231656f5017f195aebe7"
	I1101 08:33:06.928234 2323580 cri.go:89] found id: "8a52242ff83bb2c360c37d00a820f361e325851ade8acc4cc79d3753a40747c2"
	I1101 08:33:06.928237 2323580 cri.go:89] found id: "2567a3a7bafb70b92331208292b9e993dda24d204dd0e1335895f63c557be7b0"
	I1101 08:33:06.928240 2323580 cri.go:89] found id: "8b0193372487bea326225079bf14bbd934e98d53cba7eaf50fc1bc3f324dcf89"
	I1101 08:33:06.928243 2323580 cri.go:89] found id: ""
	I1101 08:33:06.928298 2323580 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:33:06.953734 2323580 out.go:203] 
	W1101 08:33:06.956529 2323580 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:33:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:33:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:33:06.956555 2323580 out.go:285] * 
	* 
	W1101 08:33:06.968880 2323580 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:33:06.974018 2323580 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-377223 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-377223
helpers_test.go:243: (dbg) docker inspect addons-377223:

-- stdout --
	[
	    {
	        "Id": "6884fdaa9d12b8ac05ab8c27110a73f94e382dc819395576a961daa9562f8a7c",
	        "Created": "2025-11-01T08:30:01.345784179Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2317129,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T08:30:01.425079886Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/6884fdaa9d12b8ac05ab8c27110a73f94e382dc819395576a961daa9562f8a7c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6884fdaa9d12b8ac05ab8c27110a73f94e382dc819395576a961daa9562f8a7c/hostname",
	        "HostsPath": "/var/lib/docker/containers/6884fdaa9d12b8ac05ab8c27110a73f94e382dc819395576a961daa9562f8a7c/hosts",
	        "LogPath": "/var/lib/docker/containers/6884fdaa9d12b8ac05ab8c27110a73f94e382dc819395576a961daa9562f8a7c/6884fdaa9d12b8ac05ab8c27110a73f94e382dc819395576a961daa9562f8a7c-json.log",
	        "Name": "/addons-377223",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-377223:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-377223",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6884fdaa9d12b8ac05ab8c27110a73f94e382dc819395576a961daa9562f8a7c",
	                "LowerDir": "/var/lib/docker/overlay2/d2e642e433ff80c15a157f6ff17b27c31b901009c25caa735bd2b0753db4c7bb-init/diff:/var/lib/docker/overlay2/e248e2c4c8c52e2b41c7098e27a1e6d3433c7b0d01c47093073da500268c4b77/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d2e642e433ff80c15a157f6ff17b27c31b901009c25caa735bd2b0753db4c7bb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d2e642e433ff80c15a157f6ff17b27c31b901009c25caa735bd2b0753db4c7bb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d2e642e433ff80c15a157f6ff17b27c31b901009c25caa735bd2b0753db4c7bb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-377223",
	                "Source": "/var/lib/docker/volumes/addons-377223/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-377223",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-377223",
	                "name.minikube.sigs.k8s.io": "addons-377223",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d458471456f387032f6e83ec4e978b2230ee0d641d45ecd31b07e88643dee31e",
	            "SandboxKey": "/var/run/docker/netns/d458471456f3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36055"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36056"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36059"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36057"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36058"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-377223": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:94:0b:1f:b5:f2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "936d16469801a3951dccf33a5a4c1dd7e8742e643175eea2b5578e8fdc28e87b",
	                    "EndpointID": "b4945a2467466221b3ab51efdaf28cf4eb7a0f66dfc7c73a7bcf086a9645db0c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-377223",
	                        "6884fdaa9d12"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-377223 -n addons-377223
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-377223 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-377223 logs -n 25: (1.773986272s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-778815 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-778815   │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:29 UTC │
	│ delete  │ -p download-only-778815                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-778815   │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-607531 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-607531   │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:29 UTC │
	│ delete  │ -p download-only-607531                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-607531   │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:29 UTC │
	│ delete  │ -p download-only-778815                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-778815   │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:29 UTC │
	│ delete  │ -p download-only-607531                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-607531   │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:29 UTC │
	│ start   │ --download-only -p download-docker-849797 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-849797 │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │                     │
	│ delete  │ -p download-docker-849797                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-849797 │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:29 UTC │
	│ start   │ --download-only -p binary-mirror-203275 --alsologtostderr --binary-mirror http://127.0.0.1:39087 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-203275   │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │                     │
	│ delete  │ -p binary-mirror-203275                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-203275   │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:29 UTC │
	│ addons  │ enable dashboard -p addons-377223                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-377223          │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │                     │
	│ addons  │ disable dashboard -p addons-377223                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-377223          │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │                     │
	│ start   │ -p addons-377223 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-377223          │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:32 UTC │
	│ addons  │ addons-377223 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-377223          │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │                     │
	│ addons  │ addons-377223 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-377223          │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │                     │
	│ addons  │ addons-377223 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-377223          │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │                     │
	│ addons  │ addons-377223 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-377223          │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │                     │
	│ ip      │ addons-377223 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-377223          │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │ 01 Nov 25 08:33 UTC │
	│ addons  │ addons-377223 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-377223          │ jenkins │ v1.37.0 │ 01 Nov 25 08:33 UTC │                     │
	│ ssh     │ addons-377223 ssh cat /opt/local-path-provisioner/pvc-b9d8d8a4-42f3-4d56-9455-13fa291567c9_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-377223          │ jenkins │ v1.37.0 │ 01 Nov 25 08:33 UTC │ 01 Nov 25 08:33 UTC │
	│ addons  │ addons-377223 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-377223          │ jenkins │ v1.37.0 │ 01 Nov 25 08:33 UTC │                     │
	│ addons  │ addons-377223 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-377223          │ jenkins │ v1.37.0 │ 01 Nov 25 08:33 UTC │                     │
	│ addons  │ enable headlamp -p addons-377223 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-377223          │ jenkins │ v1.37.0 │ 01 Nov 25 08:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 08:29:36
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 08:29:36.109928 2316740 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:29:36.110062 2316740 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:29:36.110072 2316740 out.go:374] Setting ErrFile to fd 2...
	I1101 08:29:36.110077 2316740 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:29:36.110322 2316740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 08:29:36.110768 2316740 out.go:368] Setting JSON to false
	I1101 08:29:36.111624 2316740 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":61922,"bootTime":1761923854,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 08:29:36.111695 2316740 start.go:143] virtualization:  
	I1101 08:29:36.115791 2316740 out.go:179] * [addons-377223] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 08:29:36.118478 2316740 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 08:29:36.118563 2316740 notify.go:221] Checking for updates...
	I1101 08:29:36.123863 2316740 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 08:29:36.126257 2316740 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 08:29:36.128742 2316740 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2314135/.minikube
	I1101 08:29:36.131841 2316740 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 08:29:36.134391 2316740 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 08:29:36.137221 2316740 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 08:29:36.162701 2316740 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 08:29:36.162854 2316740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:29:36.221359 2316740 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-01 08:29:36.212539631 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 08:29:36.221462 2316740 docker.go:319] overlay module found
	I1101 08:29:36.224272 2316740 out.go:179] * Using the docker driver based on user configuration
	I1101 08:29:36.226741 2316740 start.go:309] selected driver: docker
	I1101 08:29:36.226758 2316740 start.go:930] validating driver "docker" against <nil>
	I1101 08:29:36.226771 2316740 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 08:29:36.227508 2316740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:29:36.295772 2316740 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-01 08:29:36.286686201 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 08:29:36.295977 2316740 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 08:29:36.296225 2316740 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 08:29:36.299006 2316740 out.go:179] * Using Docker driver with root privileges
	I1101 08:29:36.301728 2316740 cni.go:84] Creating CNI manager for ""
	I1101 08:29:36.301792 2316740 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 08:29:36.301804 2316740 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 08:29:36.301880 2316740 start.go:353] cluster config:
	{Name:addons-377223 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-377223 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1101 08:29:36.304874 2316740 out.go:179] * Starting "addons-377223" primary control-plane node in "addons-377223" cluster
	I1101 08:29:36.307700 2316740 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 08:29:36.311183 2316740 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 08:29:36.313741 2316740 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 08:29:36.313801 2316740 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 08:29:36.313815 2316740 cache.go:59] Caching tarball of preloaded images
	I1101 08:29:36.313923 2316740 preload.go:233] Found /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 08:29:36.313938 2316740 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 08:29:36.314283 2316740 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/config.json ...
	I1101 08:29:36.314311 2316740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/config.json: {Name:mk707a5761aa06a3feb48f1bb35d185f16273e51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:36.314478 2316740 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 08:29:36.329749 2316740 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 08:29:36.329895 2316740 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1101 08:29:36.329928 2316740 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1101 08:29:36.329937 2316740 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1101 08:29:36.329944 2316740 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1101 08:29:36.329949 2316740 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1101 08:29:53.891088 2316740 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1101 08:29:53.891131 2316740 cache.go:233] Successfully downloaded all kic artifacts
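	The kicbase handling above first checks the local Docker daemon, then falls back to minikube's on-disk tarball cache. A minimal sketch of the daemon-side check, with the image reference copied from this log and everything else assumed:
	# Sketch: does the local daemon already have the kicbase image this run used?
	docker image inspect --format '{{.Id}}' \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773 >/dev/null 2>&1 \
	  && echo "kicbase present in local daemon" \
	  || echo "kicbase missing; minikube loads it from its cached tarball instead"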
	I1101 08:29:53.891174 2316740 start.go:360] acquireMachinesLock for addons-377223: {Name:mk565622d540197422d5be45c5a825dc2f42c6dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 08:29:53.891293 2316740 start.go:364] duration metric: took 94.536µs to acquireMachinesLock for "addons-377223"
	I1101 08:29:53.891343 2316740 start.go:93] Provisioning new machine with config: &{Name:addons-377223 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-377223 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 08:29:53.891412 2316740 start.go:125] createHost starting for "" (driver="docker")
	I1101 08:29:53.894809 2316740 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1101 08:29:53.895048 2316740 start.go:159] libmachine.API.Create for "addons-377223" (driver="docker")
	I1101 08:29:53.895087 2316740 client.go:173] LocalClient.Create starting
	I1101 08:29:53.895211 2316740 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem
	I1101 08:29:54.139129 2316740 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem
	I1101 08:29:54.706440 2316740 cli_runner.go:164] Run: docker network inspect addons-377223 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 08:29:54.722711 2316740 cli_runner.go:211] docker network inspect addons-377223 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 08:29:54.722796 2316740 network_create.go:284] running [docker network inspect addons-377223] to gather additional debugging logs...
	I1101 08:29:54.722816 2316740 cli_runner.go:164] Run: docker network inspect addons-377223
	W1101 08:29:54.737683 2316740 cli_runner.go:211] docker network inspect addons-377223 returned with exit code 1
	I1101 08:29:54.737715 2316740 network_create.go:287] error running [docker network inspect addons-377223]: docker network inspect addons-377223: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-377223 not found
	I1101 08:29:54.737739 2316740 network_create.go:289] output of [docker network inspect addons-377223]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-377223 not found
	
	** /stderr **
	I1101 08:29:54.737840 2316740 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 08:29:54.753822 2316740 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b643c0}
	I1101 08:29:54.753868 2316740 network_create.go:124] attempt to create docker network addons-377223 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1101 08:29:54.753926 2316740 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-377223 addons-377223
	I1101 08:29:54.811609 2316740 network_create.go:108] docker network addons-377223 192.168.49.0/24 created
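	The network step above is ordinary Docker CLI work. A simplified sketch for reproducing it by hand follows; the profile name addons-377223 and the 192.168.49.0/24 subnet are taken from this log, and the extra -o options minikube passes are omitted:
	# Sketch only: create and verify a bridge network like the one minikube built.
	docker network create \
	  --driver=bridge \
	  --subnet=192.168.49.0/24 \
	  --gateway=192.168.49.1 \
	  -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true \
	  addons-377223
	# Confirm the subnet and gateway the node container will rely on.
	docker network inspect addons-377223 \
	  --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'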
	I1101 08:29:54.811641 2316740 kic.go:121] calculated static IP "192.168.49.2" for the "addons-377223" container
	I1101 08:29:54.811730 2316740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 08:29:54.827385 2316740 cli_runner.go:164] Run: docker volume create addons-377223 --label name.minikube.sigs.k8s.io=addons-377223 --label created_by.minikube.sigs.k8s.io=true
	I1101 08:29:54.844623 2316740 oci.go:103] Successfully created a docker volume addons-377223
	I1101 08:29:54.844712 2316740 cli_runner.go:164] Run: docker run --rm --name addons-377223-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-377223 --entrypoint /usr/bin/test -v addons-377223:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 08:29:56.882611 2316740 cli_runner.go:217] Completed: docker run --rm --name addons-377223-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-377223 --entrypoint /usr/bin/test -v addons-377223:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (2.037859682s)
	I1101 08:29:56.882642 2316740 oci.go:107] Successfully prepared a docker volume addons-377223
	I1101 08:29:56.882681 2316740 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 08:29:56.882701 2316740 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 08:29:56.882758 2316740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-377223:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 08:30:01.247442 2316740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-377223:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.364647165s)
	I1101 08:30:01.247476 2316740 kic.go:203] duration metric: took 4.364771429s to extract preloaded images to volume ...
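	The two sidecar runs above follow a common pattern: seed a named volume from a host tarball through a throwaway container. A hedged sketch of the same idea; the tarball path is an assumption and the image tag is copied from this log:
	# Sketch: unpack a preload tarball into a named Docker volume.
	VOLUME=addons-377223
	TARBALL="$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4"
	docker volume create "$VOLUME"
	docker run --rm \
	  --entrypoint /usr/bin/tar \
	  -v "$TARBALL":/preloaded.tar:ro \
	  -v "$VOLUME":/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773 \
	  -I lz4 -xf /preloaded.tar -C /extractDir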
	W1101 08:30:01.247637 2316740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 08:30:01.247743 2316740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 08:30:01.327324 2316740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-377223 --name addons-377223 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-377223 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-377223 --network addons-377223 --ip 192.168.49.2 --volume addons-377223:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 08:30:01.675552 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Running}}
	I1101 08:30:01.713691 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:01.736219 2316740 cli_runner.go:164] Run: docker exec addons-377223 stat /var/lib/dpkg/alternatives/iptables
	I1101 08:30:01.794391 2316740 oci.go:144] the created container "addons-377223" has a running status.
	I1101 08:30:01.794420 2316740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa...
	I1101 08:30:01.907364 2316740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 08:30:01.936719 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:01.976148 2316740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 08:30:01.976174 2316740 kic_runner.go:114] Args: [docker exec --privileged addons-377223 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 08:30:02.064436 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:02.097797 2316740 machine.go:94] provisionDockerMachine start ...
	I1101 08:30:02.097927 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:02.130778 2316740 main.go:143] libmachine: Using SSH client type: native
	I1101 08:30:02.131138 2316740 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36055 <nil> <nil>}
	I1101 08:30:02.131152 2316740 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 08:30:02.133185 2316740 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 08:30:05.283351 2316740 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-377223
	
	I1101 08:30:05.283384 2316740 ubuntu.go:182] provisioning hostname "addons-377223"
	I1101 08:30:05.283446 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:05.300293 2316740 main.go:143] libmachine: Using SSH client type: native
	I1101 08:30:05.300606 2316740 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36055 <nil> <nil>}
	I1101 08:30:05.300621 2316740 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-377223 && echo "addons-377223" | sudo tee /etc/hostname
	I1101 08:30:05.456481 2316740 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-377223
	
	I1101 08:30:05.456608 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:05.474428 2316740 main.go:143] libmachine: Using SSH client type: native
	I1101 08:30:05.474744 2316740 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36055 <nil> <nil>}
	I1101 08:30:05.474765 2316740 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-377223' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-377223/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-377223' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 08:30:05.619700 2316740 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 08:30:05.619726 2316740 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-2314135/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-2314135/.minikube}
	I1101 08:30:05.619747 2316740 ubuntu.go:190] setting up certificates
	I1101 08:30:05.619756 2316740 provision.go:84] configureAuth start
	I1101 08:30:05.619815 2316740 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-377223
	I1101 08:30:05.636457 2316740 provision.go:143] copyHostCerts
	I1101 08:30:05.636535 2316740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem (1675 bytes)
	I1101 08:30:05.636665 2316740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem (1082 bytes)
	I1101 08:30:05.636730 2316740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem (1123 bytes)
	I1101 08:30:05.636782 2316740 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem org=jenkins.addons-377223 san=[127.0.0.1 192.168.49.2 addons-377223 localhost minikube]
	I1101 08:30:06.119766 2316740 provision.go:177] copyRemoteCerts
	I1101 08:30:06.119834 2316740 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 08:30:06.119894 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:06.136805 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:06.238924 2316740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 08:30:06.255259 2316740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 08:30:06.271607 2316740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 08:30:06.287933 2316740 provision.go:87] duration metric: took 668.068135ms to configureAuth
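	configureAuth above generates the CA-signed server certificate in Go, with the SANs listed in the log and the profile's 26280h (1095-day) expiry. An equivalent-in-spirit openssl sketch, not minikube's actual code path; the file names are placeholders:
	# Sketch: issue a server cert from an existing CA with the same SAN set.
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.addons-377223"
	openssl x509 -req -in server.csr \
	  -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 1095 \
	  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:addons-377223,DNS:localhost,DNS:minikube")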
	I1101 08:30:06.287959 2316740 ubuntu.go:206] setting minikube options for container-runtime
	I1101 08:30:06.288184 2316740 config.go:182] Loaded profile config "addons-377223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:30:06.288302 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:06.305804 2316740 main.go:143] libmachine: Using SSH client type: native
	I1101 08:30:06.306108 2316740 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36055 <nil> <nil>}
	I1101 08:30:06.306128 2316740 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 08:30:06.554710 2316740 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 08:30:06.554734 2316740 machine.go:97] duration metric: took 4.456914687s to provisionDockerMachine
	I1101 08:30:06.554742 2316740 client.go:176] duration metric: took 12.65964649s to LocalClient.Create
	I1101 08:30:06.554758 2316740 start.go:167] duration metric: took 12.659708199s to libmachine.API.Create "addons-377223"
	I1101 08:30:06.554765 2316740 start.go:293] postStartSetup for "addons-377223" (driver="docker")
	I1101 08:30:06.554775 2316740 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 08:30:06.554849 2316740 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 08:30:06.554896 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:06.573206 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:06.675389 2316740 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 08:30:06.678533 2316740 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 08:30:06.678557 2316740 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 08:30:06.678567 2316740 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/addons for local assets ...
	I1101 08:30:06.678631 2316740 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/files for local assets ...
	I1101 08:30:06.678654 2316740 start.go:296] duration metric: took 123.883271ms for postStartSetup
	I1101 08:30:06.678955 2316740 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-377223
	I1101 08:30:06.695040 2316740 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/config.json ...
	I1101 08:30:06.695307 2316740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 08:30:06.695345 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:06.711415 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:06.812378 2316740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 08:30:06.816651 2316740 start.go:128] duration metric: took 12.925225236s to createHost
	I1101 08:30:06.816671 2316740 start.go:83] releasing machines lock for "addons-377223", held for 12.925351748s
	I1101 08:30:06.816737 2316740 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-377223
	I1101 08:30:06.832976 2316740 ssh_runner.go:195] Run: cat /version.json
	I1101 08:30:06.833028 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:06.833104 2316740 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 08:30:06.833163 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:06.856390 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:06.863964 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:07.045805 2316740 ssh_runner.go:195] Run: systemctl --version
	I1101 08:30:07.051842 2316740 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 08:30:07.088918 2316740 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 08:30:07.092988 2316740 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 08:30:07.093061 2316740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 08:30:07.120631 2316740 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 08:30:07.120698 2316740 start.go:496] detecting cgroup driver to use...
	I1101 08:30:07.120743 2316740 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 08:30:07.120835 2316740 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 08:30:07.136999 2316740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 08:30:07.149289 2316740 docker.go:218] disabling cri-docker service (if available) ...
	I1101 08:30:07.149351 2316740 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 08:30:07.166366 2316740 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 08:30:07.184170 2316740 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 08:30:07.306023 2316740 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 08:30:07.420575 2316740 docker.go:234] disabling docker service ...
	I1101 08:30:07.420692 2316740 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 08:30:07.442915 2316740 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 08:30:07.455407 2316740 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 08:30:07.564072 2316740 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 08:30:07.684628 2316740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 08:30:07.696736 2316740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 08:30:07.709749 2316740 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 08:30:07.709828 2316740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:30:07.718138 2316740 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 08:30:07.718223 2316740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:30:07.726326 2316740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:30:07.734411 2316740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:30:07.742491 2316740 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 08:30:07.750243 2316740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:30:07.758370 2316740 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:30:07.770885 2316740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:30:07.779197 2316740 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 08:30:07.786567 2316740 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 08:30:07.793795 2316740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 08:30:07.896544 2316740 ssh_runner.go:195] Run: sudo systemctl restart crio
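	The sed edits above rewrite the CRI-O drop-in before this restart. A quick way to confirm the result by hand, assuming the container name and drop-in path reported in this log:
	# Sketch: check the values the sed edits should have left behind.
	docker exec addons-377223 sudo grep -E 'pause_image|cgroup_manager|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	docker exec addons-377223 sudo systemctl is-active crio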
	I1101 08:30:08.016532 2316740 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 08:30:08.016642 2316740 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 08:30:08.020591 2316740 start.go:564] Will wait 60s for crictl version
	I1101 08:30:08.020701 2316740 ssh_runner.go:195] Run: which crictl
	I1101 08:30:08.024572 2316740 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 08:30:08.048185 2316740 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 08:30:08.048339 2316740 ssh_runner.go:195] Run: crio --version
	I1101 08:30:08.075422 2316740 ssh_runner.go:195] Run: crio --version
	I1101 08:30:08.110012 2316740 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 08:30:08.112819 2316740 cli_runner.go:164] Run: docker network inspect addons-377223 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 08:30:08.128321 2316740 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 08:30:08.132028 2316740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 08:30:08.141985 2316740 kubeadm.go:884] updating cluster {Name:addons-377223 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-377223 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 08:30:08.142142 2316740 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 08:30:08.142200 2316740 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 08:30:08.172038 2316740 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 08:30:08.172064 2316740 crio.go:433] Images already preloaded, skipping extraction
	I1101 08:30:08.172127 2316740 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 08:30:08.197357 2316740 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 08:30:08.197382 2316740 cache_images.go:86] Images are preloaded, skipping loading
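	The preload decision above is based on the JSON output of crictl. A shorter human-readable spot check, assuming the container name from this log and crictl at the path the log reports:
	# Sketch: list what the runtime already has, so the "skipping loading" decision can be verified.
	docker exec addons-377223 sudo /usr/local/bin/crictl images | head -n 15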
	I1101 08:30:08.197389 2316740 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1101 08:30:08.197507 2316740 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-377223 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-377223 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 08:30:08.197600 2316740 ssh_runner.go:195] Run: crio config
	I1101 08:30:08.262465 2316740 cni.go:84] Creating CNI manager for ""
	I1101 08:30:08.262538 2316740 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 08:30:08.262573 2316740 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 08:30:08.262624 2316740 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-377223 NodeName:addons-377223 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 08:30:08.262769 2316740 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-377223"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 08:30:08.262865 2316740 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 08:30:08.270486 2316740 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 08:30:08.270584 2316740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 08:30:08.278074 2316740 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1101 08:30:08.290498 2316740 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 08:30:08.303069 2316740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
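	The rendered kubeadm config lands at /var/tmp/minikube/kubeadm.yaml.new. A hedged sketch for validating it in place; the "kubeadm config validate" subcommand assumes a reasonably recent kubeadm, and the binary path is copied from this log:
	# Sketch: sanity-check the generated config before kubeadm init consumes it.
	docker exec addons-377223 sudo /var/lib/minikube/binaries/v1.34.1/kubeadm \
	  config validate --config /var/tmp/minikube/kubeadm.yaml.new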
	I1101 08:30:08.315750 2316740 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1101 08:30:08.319175 2316740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 08:30:08.328992 2316740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 08:30:08.442844 2316740 ssh_runner.go:195] Run: sudo systemctl start kubelet
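	kubelet is started here from the unit and drop-in written just above, before the cluster certificates exist, so it may keep restarting until kubeadm init completes. A sketch for inspecting its state, assuming the container name from this log:
	# Sketch: show the effective unit (including 10-kubeadm.conf) and its current state.
	docker exec addons-377223 sudo systemctl cat kubelet
	docker exec addons-377223 sudo systemctl is-active kubelet || true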
	I1101 08:30:08.458462 2316740 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223 for IP: 192.168.49.2
	I1101 08:30:08.458481 2316740 certs.go:195] generating shared ca certs ...
	I1101 08:30:08.458497 2316740 certs.go:227] acquiring lock for ca certs: {Name:mk24842b93d4e231663829c7c8677798ff77a3a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:30:08.458618 2316740 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key
	I1101 08:30:09.004054 2316740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt ...
	I1101 08:30:09.004101 2316740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt: {Name:mkb30c251a0186d14ca3dc95f9f38db60acf13e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:30:09.004336 2316740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key ...
	I1101 08:30:09.004354 2316740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key: {Name:mk676e72c64736a65b6cd527cf9a075dbc322d08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:30:09.004439 2316740 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key
	I1101 08:30:09.317720 2316740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.crt ...
	I1101 08:30:09.317753 2316740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.crt: {Name:mk097382b33d757885fbe3314ac20d0d846a401f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:30:09.317959 2316740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key ...
	I1101 08:30:09.317973 2316740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key: {Name:mk487123a20a0843902554f556877d9e807297c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:30:09.318066 2316740 certs.go:257] generating profile certs ...
	I1101 08:30:09.318127 2316740 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.key
	I1101 08:30:09.318145 2316740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt with IP's: []
	I1101 08:30:10.095776 2316740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt ...
	I1101 08:30:10.095817 2316740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt: {Name:mk2e12a5ee979e835444f26baf6cea16dadadded Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:30:10.096039 2316740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.key ...
	I1101 08:30:10.096052 2316740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.key: {Name:mk32d7b806304f01fbf6fcad8c77561a2f7e70cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:30:10.096147 2316740 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/apiserver.key.7c033e1a
	I1101 08:30:10.096168 2316740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/apiserver.crt.7c033e1a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1101 08:30:10.181292 2316740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/apiserver.crt.7c033e1a ...
	I1101 08:30:10.181346 2316740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/apiserver.crt.7c033e1a: {Name:mk9864f04219f2e56a48a1df299509615ad1f08e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:30:10.181518 2316740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/apiserver.key.7c033e1a ...
	I1101 08:30:10.181532 2316740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/apiserver.key.7c033e1a: {Name:mka4a060e4e5958e4895fbd15cf4a7dc9b680a22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:30:10.181617 2316740 certs.go:382] copying /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/apiserver.crt.7c033e1a -> /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/apiserver.crt
	I1101 08:30:10.181695 2316740 certs.go:386] copying /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/apiserver.key.7c033e1a -> /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/apiserver.key
	I1101 08:30:10.181752 2316740 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/proxy-client.key
	I1101 08:30:10.181773 2316740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/proxy-client.crt with IP's: []
	I1101 08:30:10.721386 2316740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/proxy-client.crt ...
	I1101 08:30:10.721418 2316740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/proxy-client.crt: {Name:mk273d3f416e5e8e0db2b485fbe082b549ff7a24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:30:10.721594 2316740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/proxy-client.key ...
	I1101 08:30:10.721607 2316740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/proxy-client.key: {Name:mk7931d74d94975d33ebde71a1fe88fe631527fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:30:10.721787 2316740 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 08:30:10.721824 2316740 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem (1082 bytes)
	I1101 08:30:10.721851 2316740 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem (1123 bytes)
	I1101 08:30:10.721881 2316740 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem (1675 bytes)
	I1101 08:30:10.722414 2316740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 08:30:10.739546 2316740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 08:30:10.756932 2316740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 08:30:10.776841 2316740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 08:30:10.795266 2316740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 08:30:10.814465 2316740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 08:30:10.831014 2316740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 08:30:10.847300 2316740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 08:30:10.864557 2316740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 08:30:10.881633 2316740 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 08:30:10.894009 2316740 ssh_runner.go:195] Run: openssl version
	I1101 08:30:10.899973 2316740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 08:30:10.908293 2316740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 08:30:10.911732 2316740 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1101 08:30:10.911794 2316740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 08:30:10.952071 2316740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 08:30:10.960012 2316740 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 08:30:10.963292 2316740 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 08:30:10.963378 2316740 kubeadm.go:401] StartCluster: {Name:addons-377223 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-377223 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 08:30:10.963467 2316740 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:30:10.963522 2316740 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:30:10.992650 2316740 cri.go:89] found id: ""
	I1101 08:30:10.992717 2316740 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 08:30:11.000343 2316740 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 08:30:11.009304 2316740 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 08:30:11.009382 2316740 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 08:30:11.017505 2316740 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 08:30:11.017525 2316740 kubeadm.go:158] found existing configuration files:
	
	I1101 08:30:11.017575 2316740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 08:30:11.025496 2316740 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 08:30:11.025560 2316740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 08:30:11.032631 2316740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 08:30:11.039948 2316740 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 08:30:11.040010 2316740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 08:30:11.047169 2316740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 08:30:11.054885 2316740 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 08:30:11.054951 2316740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 08:30:11.062574 2316740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 08:30:11.070347 2316740 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 08:30:11.070416 2316740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 08:30:11.077975 2316740 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 08:30:11.118602 2316740 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 08:30:11.118921 2316740 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 08:30:11.147936 2316740 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 08:30:11.148032 2316740 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 08:30:11.148073 2316740 kubeadm.go:319] OS: Linux
	I1101 08:30:11.148146 2316740 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 08:30:11.148210 2316740 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 08:30:11.148279 2316740 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 08:30:11.148353 2316740 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 08:30:11.148422 2316740 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 08:30:11.148489 2316740 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 08:30:11.148556 2316740 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 08:30:11.148621 2316740 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 08:30:11.148679 2316740 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 08:30:11.218173 2316740 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 08:30:11.218299 2316740 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 08:30:11.218445 2316740 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 08:30:11.225738 2316740 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 08:30:11.228640 2316740 out.go:252]   - Generating certificates and keys ...
	I1101 08:30:11.228801 2316740 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 08:30:11.228913 2316740 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 08:30:12.642829 2316740 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 08:30:13.075401 2316740 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 08:30:13.753990 2316740 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 08:30:14.509744 2316740 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 08:30:15.043006 2316740 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 08:30:15.043165 2316740 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-377223 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 08:30:15.546346 2316740 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 08:30:15.546501 2316740 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-377223 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 08:30:16.764193 2316740 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 08:30:17.020568 2316740 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 08:30:17.749115 2316740 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 08:30:17.749443 2316740 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 08:30:18.236842 2316740 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 08:30:18.928577 2316740 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 08:30:19.690810 2316740 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 08:30:19.900238 2316740 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 08:30:20.084810 2316740 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 08:30:20.085593 2316740 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 08:30:20.088398 2316740 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 08:30:20.092011 2316740 out.go:252]   - Booting up control plane ...
	I1101 08:30:20.092126 2316740 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 08:30:20.092208 2316740 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 08:30:20.092278 2316740 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 08:30:20.108811 2316740 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 08:30:20.109135 2316740 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 08:30:20.116886 2316740 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 08:30:20.117184 2316740 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 08:30:20.117486 2316740 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 08:30:20.257388 2316740 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 08:30:20.257549 2316740 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 08:30:21.258606 2316740 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001678313s
	I1101 08:30:21.262995 2316740 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 08:30:21.263123 2316740 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1101 08:30:21.263245 2316740 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 08:30:21.263384 2316740 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 08:30:24.160269 2316740 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.89718019s
	I1101 08:30:26.828449 2316740 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.565805378s
	I1101 08:30:27.264807 2316740 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.002062042s
	I1101 08:30:27.287387 2316740 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 08:30:27.300708 2316740 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 08:30:27.323512 2316740 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 08:30:27.323735 2316740 kubeadm.go:319] [mark-control-plane] Marking the node addons-377223 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 08:30:27.338362 2316740 kubeadm.go:319] [bootstrap-token] Using token: j41a3s.jdvrqm41b2wdvu6m
	I1101 08:30:27.341431 2316740 out.go:252]   - Configuring RBAC rules ...
	I1101 08:30:27.341554 2316740 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 08:30:27.349284 2316740 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 08:30:27.357187 2316740 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 08:30:27.361403 2316740 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 08:30:27.368716 2316740 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 08:30:27.372445 2316740 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 08:30:27.673508 2316740 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 08:30:28.130883 2316740 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 08:30:28.671347 2316740 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 08:30:28.672477 2316740 kubeadm.go:319] 
	I1101 08:30:28.672554 2316740 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 08:30:28.672560 2316740 kubeadm.go:319] 
	I1101 08:30:28.672641 2316740 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 08:30:28.672646 2316740 kubeadm.go:319] 
	I1101 08:30:28.672672 2316740 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 08:30:28.672734 2316740 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 08:30:28.672787 2316740 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 08:30:28.672791 2316740 kubeadm.go:319] 
	I1101 08:30:28.672848 2316740 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 08:30:28.672853 2316740 kubeadm.go:319] 
	I1101 08:30:28.672903 2316740 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 08:30:28.672908 2316740 kubeadm.go:319] 
	I1101 08:30:28.672962 2316740 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 08:30:28.673040 2316740 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 08:30:28.673111 2316740 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 08:30:28.673115 2316740 kubeadm.go:319] 
	I1101 08:30:28.673204 2316740 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 08:30:28.673285 2316740 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 08:30:28.673289 2316740 kubeadm.go:319] 
	I1101 08:30:28.673395 2316740 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token j41a3s.jdvrqm41b2wdvu6m \
	I1101 08:30:28.673504 2316740 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4543f3590cccb8495171c728a2631a18a238961aafa5b09f43cdaf25ae01fa5d \
	I1101 08:30:28.673526 2316740 kubeadm.go:319] 	--control-plane 
	I1101 08:30:28.673530 2316740 kubeadm.go:319] 
	I1101 08:30:28.673619 2316740 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 08:30:28.673623 2316740 kubeadm.go:319] 
	I1101 08:30:28.673709 2316740 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token j41a3s.jdvrqm41b2wdvu6m \
	I1101 08:30:28.673817 2316740 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4543f3590cccb8495171c728a2631a18a238961aafa5b09f43cdaf25ae01fa5d 
	I1101 08:30:28.675659 2316740 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 08:30:28.675916 2316740 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 08:30:28.676027 2316740 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 08:30:28.676059 2316740 cni.go:84] Creating CNI manager for ""
	I1101 08:30:28.676068 2316740 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 08:30:28.679180 2316740 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 08:30:28.682199 2316740 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 08:30:28.686104 2316740 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 08:30:28.686164 2316740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 08:30:28.698831 2316740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 08:30:28.982137 2316740 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 08:30:28.982230 2316740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:28.982279 2316740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-377223 minikube.k8s.io/updated_at=2025_11_01T08_30_28_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192 minikube.k8s.io/name=addons-377223 minikube.k8s.io/primary=true
	I1101 08:30:29.127702 2316740 ops.go:34] apiserver oom_adj: -16
	I1101 08:30:29.127830 2316740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:29.628284 2316740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:30.128002 2316740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:30.628236 2316740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:31.128448 2316740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:31.628092 2316740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:32.128948 2316740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:32.627940 2316740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:33.128002 2316740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:33.627999 2316740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:33.732935 2316740 kubeadm.go:1114] duration metric: took 4.750760568s to wait for elevateKubeSystemPrivileges
	I1101 08:30:33.732970 2316740 kubeadm.go:403] duration metric: took 22.76959635s to StartCluster
	I1101 08:30:33.732987 2316740 settings.go:142] acquiring lock: {Name:mka73a3765cb6575d4abe38a6ae3325222684786 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:30:33.733096 2316740 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 08:30:33.733554 2316740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/kubeconfig: {Name:mk53329368b7306829f4e47471838b13e1e36d2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:30:33.733744 2316740 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 08:30:33.733872 2316740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 08:30:33.734105 2316740 config.go:182] Loaded profile config "addons-377223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:30:33.734133 2316740 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1101 08:30:33.734226 2316740 addons.go:70] Setting yakd=true in profile "addons-377223"
	I1101 08:30:33.734240 2316740 addons.go:239] Setting addon yakd=true in "addons-377223"
	I1101 08:30:33.734262 2316740 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:30:33.734819 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:33.735353 2316740 addons.go:70] Setting metrics-server=true in profile "addons-377223"
	I1101 08:30:33.735379 2316740 addons.go:239] Setting addon metrics-server=true in "addons-377223"
	I1101 08:30:33.735401 2316740 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:30:33.735792 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:33.735961 2316740 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-377223"
	I1101 08:30:33.735983 2316740 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-377223"
	I1101 08:30:33.736029 2316740 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:30:33.736447 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:33.737250 2316740 addons.go:70] Setting registry=true in profile "addons-377223"
	I1101 08:30:33.737305 2316740 addons.go:239] Setting addon registry=true in "addons-377223"
	I1101 08:30:33.737343 2316740 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:30:33.737853 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:33.738447 2316740 addons.go:70] Setting registry-creds=true in profile "addons-377223"
	I1101 08:30:33.738475 2316740 addons.go:239] Setting addon registry-creds=true in "addons-377223"
	I1101 08:30:33.738506 2316740 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:30:33.738901 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:33.739947 2316740 out.go:179] * Verifying Kubernetes components...
	I1101 08:30:33.747942 2316740 addons.go:70] Setting storage-provisioner=true in profile "addons-377223"
	I1101 08:30:33.748014 2316740 addons.go:239] Setting addon storage-provisioner=true in "addons-377223"
	I1101 08:30:33.748084 2316740 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:30:33.748516 2316740 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-377223"
	I1101 08:30:33.748545 2316740 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-377223"
	I1101 08:30:33.748769 2316740 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:30:33.751054 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:33.754237 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:33.766175 2316740 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-377223"
	I1101 08:30:33.766196 2316740 addons.go:70] Setting default-storageclass=true in profile "addons-377223"
	I1101 08:30:33.766209 2316740 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-377223"
	I1101 08:30:33.766223 2316740 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-377223"
	I1101 08:30:33.766625 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:33.766727 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:33.768301 2316740 addons.go:70] Setting gcp-auth=true in profile "addons-377223"
	I1101 08:30:33.768337 2316740 mustload.go:66] Loading cluster: addons-377223
	I1101 08:30:33.768555 2316740 config.go:182] Loaded profile config "addons-377223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:30:33.768795 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:33.784010 2316740 addons.go:70] Setting ingress=true in profile "addons-377223"
	I1101 08:30:33.784083 2316740 addons.go:239] Setting addon ingress=true in "addons-377223"
	I1101 08:30:33.784129 2316740 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:30:33.784777 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:33.766181 2316740 addons.go:70] Setting cloud-spanner=true in profile "addons-377223"
	I1101 08:30:33.791499 2316740 addons.go:239] Setting addon cloud-spanner=true in "addons-377223"
	I1101 08:30:33.791697 2316740 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:30:33.766190 2316740 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-377223"
	I1101 08:30:33.797565 2316740 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-377223"
	I1101 08:30:33.797626 2316740 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:30:33.798201 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:33.808295 2316740 addons.go:70] Setting ingress-dns=true in profile "addons-377223"
	I1101 08:30:33.808344 2316740 addons.go:239] Setting addon ingress-dns=true in "addons-377223"
	I1101 08:30:33.808387 2316740 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:30:33.808860 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:33.814898 2316740 addons.go:70] Setting volcano=true in profile "addons-377223"
	I1101 08:30:33.814989 2316740 addons.go:239] Setting addon volcano=true in "addons-377223"
	I1101 08:30:33.815072 2316740 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:30:33.815810 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:33.826738 2316740 addons.go:70] Setting inspektor-gadget=true in profile "addons-377223"
	I1101 08:30:33.826791 2316740 addons.go:239] Setting addon inspektor-gadget=true in "addons-377223"
	I1101 08:30:33.826826 2316740 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:30:33.827467 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:33.842040 2316740 addons.go:70] Setting volumesnapshots=true in profile "addons-377223"
	I1101 08:30:33.842108 2316740 addons.go:239] Setting addon volumesnapshots=true in "addons-377223"
	I1101 08:30:33.842155 2316740 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:30:33.842652 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:33.853046 2316740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 08:30:33.871964 2316740 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1101 08:30:33.875027 2316740 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 08:30:33.875094 2316740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1101 08:30:33.875176 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:33.912635 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:33.937128 2316740 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1101 08:30:33.940610 2316740 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 08:30:33.940672 2316740 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 08:30:33.940736 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:33.940901 2316740 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1101 08:30:33.945933 2316740 out.go:179]   - Using image docker.io/registry:3.0.0
	I1101 08:30:33.951679 2316740 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1101 08:30:33.951703 2316740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1101 08:30:33.951792 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:33.997699 2316740 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1101 08:30:33.997900 2316740 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 08:30:34.019894 2316740 addons.go:239] Setting addon default-storageclass=true in "addons-377223"
	I1101 08:30:34.020000 2316740 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:30:34.042883 2316740 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1101 08:30:34.044715 2316740 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1101 08:30:34.046000 2316740 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 08:30:34.046153 2316740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1101 08:30:34.046322 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:34.054282 2316740 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-377223"
	I1101 08:30:34.054324 2316740 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:30:34.054752 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:34.055191 2316740 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	W1101 08:30:34.055909 2316740 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1101 08:30:34.056640 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:34.065606 2316740 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 08:30:34.065627 2316740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1101 08:30:34.065697 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:34.077763 2316740 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 08:30:34.077788 2316740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1101 08:30:34.077862 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:34.087530 2316740 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 08:30:34.091293 2316740 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 08:30:34.091319 2316740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 08:30:34.091386 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:34.094258 2316740 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1101 08:30:34.094491 2316740 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1101 08:30:34.094505 2316740 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1101 08:30:34.094582 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:34.097994 2316740 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1101 08:30:34.099213 2316740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 08:30:34.102743 2316740 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	I1101 08:30:34.103082 2316740 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1101 08:30:34.102936 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:34.104810 2316740 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1101 08:30:34.137226 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:34.154749 2316740 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1101 08:30:34.155007 2316740 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1101 08:30:34.156266 2316740 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:30:34.170719 2316740 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1101 08:30:34.171616 2316740 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 08:30:34.171762 2316740 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1101 08:30:34.171777 2316740 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1101 08:30:34.171858 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:34.177719 2316740 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1101 08:30:34.179566 2316740 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 08:30:34.179593 2316740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1101 08:30:34.179663 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:34.197789 2316740 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1101 08:30:34.198154 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:34.198872 2316740 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1101 08:30:34.198887 2316740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1101 08:30:34.198940 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:34.208551 2316740 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1101 08:30:34.210914 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:34.212547 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:34.217087 2316740 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1101 08:30:34.221153 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:34.225098 2316740 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1101 08:30:34.225261 2316740 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1101 08:30:34.228579 2316740 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1101 08:30:34.232166 2316740 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1101 08:30:34.232192 2316740 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1101 08:30:34.232258 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:34.232477 2316740 out.go:179]   - Using image docker.io/busybox:stable
	I1101 08:30:34.236307 2316740 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 08:30:34.240418 2316740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1101 08:30:34.240545 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:34.312389 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:34.343911 2316740 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 08:30:34.343930 2316740 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 08:30:34.343991 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:34.354044 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:34.360139 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:34.363030 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:34.378878 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:34.393446 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:34.394948 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	W1101 08:30:34.408171 2316740 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1101 08:30:34.408266 2316740 retry.go:31] will retry after 222.533945ms: ssh: handshake failed: EOF
	I1101 08:30:34.411787 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:34.423273 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	W1101 08:30:34.424582 2316740 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1101 08:30:34.424603 2316740 retry.go:31] will retry after 192.804546ms: ssh: handshake failed: EOF
	I1101 08:30:34.431188 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:34.453083 2316740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 08:30:34.884643 2316740 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:34.884706 2316740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1101 08:30:34.934383 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 08:30:35.013658 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 08:30:35.032088 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 08:30:35.054602 2316740 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1101 08:30:35.054621 2316740 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1101 08:30:35.058160 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:35.094505 2316740 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1101 08:30:35.094580 2316740 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1101 08:30:35.109917 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 08:30:35.124477 2316740 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 08:30:35.124605 2316740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1101 08:30:35.209364 2316740 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1101 08:30:35.209441 2316740 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1101 08:30:35.249185 2316740 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1101 08:30:35.249270 2316740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1101 08:30:35.250001 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 08:30:35.269393 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1101 08:30:35.272211 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 08:30:35.301616 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 08:30:35.304893 2316740 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1101 08:30:35.304962 2316740 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1101 08:30:35.339749 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 08:30:35.342628 2316740 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 08:30:35.342696 2316740 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 08:30:35.413354 2316740 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1101 08:30:35.413426 2316740 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1101 08:30:35.430555 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1101 08:30:35.467997 2316740 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1101 08:30:35.468078 2316740 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1101 08:30:35.471113 2316740 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1101 08:30:35.471181 2316740 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1101 08:30:35.506612 2316740 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 08:30:35.506687 2316740 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 08:30:35.558851 2316740 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1101 08:30:35.558923 2316740 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1101 08:30:35.632166 2316740 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1101 08:30:35.632237 2316740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1101 08:30:35.648025 2316740 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1101 08:30:35.648106 2316740 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1101 08:30:35.674513 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 08:30:35.770806 2316740 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1101 08:30:35.770882 2316740 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1101 08:30:35.818247 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1101 08:30:35.855259 2316740 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1101 08:30:35.855331 2316740 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1101 08:30:35.887523 2316740 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.788278629s)
	I1101 08:30:35.887687 2316740 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1101 08:30:35.887612 2316740 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.434508358s)
	I1101 08:30:35.889241 2316740 node_ready.go:35] waiting up to 6m0s for node "addons-377223" to be "Ready" ...
	I1101 08:30:35.977095 2316740 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 08:30:35.977113 2316740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1101 08:30:36.037749 2316740 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1101 08:30:36.037824 2316740 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1101 08:30:36.354862 2316740 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1101 08:30:36.354935 2316740 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1101 08:30:36.396849 2316740 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-377223" context rescaled to 1 replicas
	I1101 08:30:36.439499 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 08:30:36.566613 2316740 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1101 08:30:36.566681 2316740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1101 08:30:36.722152 2316740 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1101 08:30:36.722225 2316740 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1101 08:30:36.888321 2316740 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1101 08:30:36.888392 2316740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1101 08:30:37.034383 2316740 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1101 08:30:37.034457 2316740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1101 08:30:37.230972 2316740 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 08:30:37.231033 2316740 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1101 08:30:37.453073 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1101 08:30:37.906539 2316740 node_ready.go:57] node "addons-377223" has "Ready":"False" status (will retry)
	I1101 08:30:38.706825 2316740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.674665709s)
	I1101 08:30:38.706923 2316740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.693239231s)
	I1101 08:30:38.869227 2316740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.81102223s)
	W1101 08:30:38.869311 2316740 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:38.869342 2316740 retry.go:31] will retry after 222.053822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:38.869419 2316740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.759429204s)
	I1101 08:30:39.092424 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:39.779181 2316740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.509706858s)
	I1101 08:30:39.779236 2316740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.506970965s)
	I1101 08:30:39.779280 2316740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.477593247s)
	I1101 08:30:39.779451 2316740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.439635428s)
	I1101 08:30:39.779499 2316740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.348886327s)
	I1101 08:30:39.779511 2316740 addons.go:480] Verifying addon registry=true in "addons-377223"
	I1101 08:30:39.779724 2316740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.105140737s)
	I1101 08:30:39.779738 2316740 addons.go:480] Verifying addon metrics-server=true in "addons-377223"
	I1101 08:30:39.779773 2316740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.96146379s)
	I1101 08:30:39.780481 2316740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.53042183s)
	I1101 08:30:39.780503 2316740 addons.go:480] Verifying addon ingress=true in "addons-377223"
	I1101 08:30:39.783829 2316740 out.go:179] * Verifying ingress addon...
	I1101 08:30:39.783870 2316740 out.go:179] * Verifying registry addon...
	I1101 08:30:39.783944 2316740 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-377223 service yakd-dashboard -n yakd-dashboard
	
	I1101 08:30:39.788345 2316740 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1101 08:30:39.788413 2316740 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1101 08:30:39.805913 2316740 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1101 08:30:39.805933 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:39.813904 2316740 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 08:30:39.813928 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:39.889879 2316740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.450289844s)
	W1101 08:30:39.889917 2316740 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1101 08:30:39.889936 2316740 retry.go:31] will retry after 371.961333ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	W1101 08:30:39.908815 2316740 node_ready.go:57] node "addons-377223" has "Ready":"False" status (will retry)
	I1101 08:30:40.262476 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 08:30:40.297080 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:40.297268 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:40.521352 2316740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.068189344s)
	I1101 08:30:40.521387 2316740 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-377223"
	I1101 08:30:40.525905 2316740 out.go:179] * Verifying csi-hostpath-driver addon...
	I1101 08:30:40.529786 2316740 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1101 08:30:40.539331 2316740 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 08:30:40.539355 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:40.622877 2316740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.530362405s)
	W1101 08:30:40.622921 2316740 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:40.622941 2316740 retry.go:31] will retry after 235.820561ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:40.794876 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:40.795117 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:40.859343 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:41.033347 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:41.293843 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:41.294923 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:41.533738 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:41.773479 2316740 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1101 08:30:41.773585 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:41.797770 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:41.798731 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:41.800281 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:41.912339 2316740 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1101 08:30:41.924677 2316740 addons.go:239] Setting addon gcp-auth=true in "addons-377223"
	I1101 08:30:41.924724 2316740 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:30:41.925159 2316740 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:30:41.941635 2316740 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1101 08:30:41.941700 2316740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:30:41.958999 2316740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:30:42.035039 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:42.292758 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:42.293333 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 08:30:42.393200 2316740 node_ready.go:57] node "addons-377223" has "Ready":"False" status (will retry)
	I1101 08:30:42.533221 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:42.798770 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:42.798971 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:43.033379 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:43.137272 2316740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.874746322s)
	I1101 08:30:43.137359 2316740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.277989671s)
	W1101 08:30:43.137384 2316740 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:43.137402 2316740 retry.go:31] will retry after 281.783242ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:43.137439 2316740 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.195787148s)
	I1101 08:30:43.140600 2316740 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1101 08:30:43.143478 2316740 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 08:30:43.146218 2316740 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1101 08:30:43.146244 2316740 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1101 08:30:43.163955 2316740 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1101 08:30:43.163982 2316740 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1101 08:30:43.176557 2316740 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 08:30:43.176580 2316740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1101 08:30:43.189931 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 08:30:43.292867 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:43.293233 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:43.419979 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:43.533607 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:43.739045 2316740 addons.go:480] Verifying addon gcp-auth=true in "addons-377223"
	I1101 08:30:43.743517 2316740 out.go:179] * Verifying gcp-auth addon...
	I1101 08:30:43.747123 2316740 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1101 08:30:43.762688 2316740 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1101 08:30:43.762713 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:43.862021 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:43.862490 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:44.033920 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:44.249955 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:44.292271 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:44.292588 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 08:30:44.326538 2316740 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:44.326624 2316740 retry.go:31] will retry after 469.798153ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:44.533915 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:44.749845 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:44.792173 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:44.792346 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:44.797588 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1101 08:30:44.892723 2316740 node_ready.go:57] node "addons-377223" has "Ready":"False" status (will retry)
	I1101 08:30:45.048267 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:45.252460 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:45.294692 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:45.296176 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:45.533033 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:30:45.721919 2316740 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:45.721952 2316740 retry.go:31] will retry after 734.26527ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:45.750557 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:45.791702 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:45.792046 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:46.032909 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:46.250927 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:46.292762 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:46.292813 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:46.456852 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:46.533185 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:46.749831 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:46.793574 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:46.794159 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:47.033961 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:47.250011 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:30:47.255507 2316740 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:47.255537 2316740 retry.go:31] will retry after 1.610799864s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:47.291513 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:47.292218 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 08:30:47.395024 2316740 node_ready.go:57] node "addons-377223" has "Ready":"False" status (will retry)
	I1101 08:30:47.533334 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:47.750662 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:47.792095 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:47.792227 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:48.033721 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:48.250622 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:48.291758 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:48.291923 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:48.535961 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:48.751224 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:48.792335 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:48.792539 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:48.866871 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:49.033484 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:49.251004 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:49.293785 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:49.294082 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:49.533619 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:30:49.650997 2316740 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:49.651027 2316740 retry.go:31] will retry after 1.785530818s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:49.749687 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:49.791889 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:49.791978 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 08:30:49.892704 2316740 node_ready.go:57] node "addons-377223" has "Ready":"False" status (will retry)
	I1101 08:30:50.032880 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:50.249921 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:50.291882 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:50.292073 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:50.533288 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:50.750175 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:50.792615 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:50.792797 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:51.033356 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:51.250650 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:51.291354 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:51.291493 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:51.436749 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:51.533306 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:51.750649 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:51.793911 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:51.794417 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:52.033683 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:52.251195 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:30:52.263502 2316740 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:52.263530 2316740 retry.go:31] will retry after 4.188195922s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:52.291843 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:52.292187 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 08:30:52.392693 2316740 node_ready.go:57] node "addons-377223" has "Ready":"False" status (will retry)
	I1101 08:30:52.532509 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:52.750505 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:52.791268 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:52.791406 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:53.033116 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:53.250088 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:53.292087 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:53.292278 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:53.533006 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:53.750794 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:53.791583 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:53.792782 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:54.033302 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:54.250374 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:54.292315 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:54.292480 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:54.533477 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:54.750578 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:54.791364 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:54.791686 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 08:30:54.892341 2316740 node_ready.go:57] node "addons-377223" has "Ready":"False" status (will retry)
	I1101 08:30:55.034703 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:55.251305 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:55.291605 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:55.291653 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:55.533483 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:55.750817 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:55.792575 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:55.792894 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:56.033049 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:56.250020 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:56.292268 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:56.292376 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:56.452177 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:56.538248 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:56.750616 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:56.792269 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:56.793094 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 08:30:56.893223 2316740 node_ready.go:57] node "addons-377223" has "Ready":"False" status (will retry)
	I1101 08:30:57.034765 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:57.250940 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:57.292191 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:57.293635 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 08:30:57.315874 2316740 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:57.315953 2316740 retry.go:31] will retry after 3.238426466s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:57.533453 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:57.750209 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:57.792660 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:57.793083 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:58.033543 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:58.250736 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:58.291946 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:58.292050 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:58.534250 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:58.751028 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:58.792455 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:58.792857 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:59.032442 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:59.250798 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:59.291606 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:59.292281 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 08:30:59.392209 2316740 node_ready.go:57] node "addons-377223" has "Ready":"False" status (will retry)
	I1101 08:30:59.532999 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:59.750689 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:59.792177 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:59.792344 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:00.057057 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:00.252068 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:00.294341 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:00.299970 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:00.533339 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:00.555370 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:31:00.750770 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:00.792344 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:00.792369 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:01.034240 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:01.251685 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:01.293336 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:01.294231 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 08:31:01.369409 2316740 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:31:01.369488 2316740 retry.go:31] will retry after 12.115012737s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 08:31:01.392942 2316740 node_ready.go:57] node "addons-377223" has "Ready":"False" status (will retry)
	I1101 08:31:01.532563 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:01.751251 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:01.792379 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:01.792776 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:02.033719 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:02.250731 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:02.291457 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:02.291558 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:02.533924 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:02.750645 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:02.792306 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:02.796628 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:03.033037 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:03.250381 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:03.292302 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:03.292512 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:03.533422 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:03.750316 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:03.792693 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:03.792900 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 08:31:03.892553 2316740 node_ready.go:57] node "addons-377223" has "Ready":"False" status (will retry)
	I1101 08:31:04.033617 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:04.250397 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:04.292814 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:04.293086 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:04.532566 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:04.750698 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:04.791714 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:04.791912 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:05.033368 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:05.250383 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:05.294379 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:05.295203 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:05.533157 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:05.750537 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:05.791979 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:05.792270 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 08:31:05.893009 2316740 node_ready.go:57] node "addons-377223" has "Ready":"False" status (will retry)
	I1101 08:31:06.033143 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:06.250557 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:06.291469 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:06.291800 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:06.533767 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:06.750751 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:06.791801 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:06.792109 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:07.033108 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:07.250010 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:07.292088 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:07.292542 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:07.534711 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:07.750352 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:07.791384 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:07.791642 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:08.033380 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:08.251418 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:08.292096 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:08.292238 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 08:31:08.393007 2316740 node_ready.go:57] node "addons-377223" has "Ready":"False" status (will retry)
	I1101 08:31:08.536297 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:08.750606 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:08.791722 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:08.792316 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:09.033844 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:09.250841 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:09.291902 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:09.292104 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:09.533356 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:09.750294 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:09.791933 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:09.792664 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:10.033938 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:10.250646 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:10.291606 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:10.291964 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:10.533375 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:10.750444 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:10.791711 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:10.791949 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 08:31:10.892850 2316740 node_ready.go:57] node "addons-377223" has "Ready":"False" status (will retry)
	I1101 08:31:11.032857 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:11.249881 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:11.291904 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:11.292159 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:11.532691 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:11.750460 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:11.791724 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:11.792093 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:12.033699 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:12.250569 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:12.291406 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:12.291650 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:12.533457 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:12.750648 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:12.791547 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:12.791682 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:13.033425 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:13.250260 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:13.291430 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:13.291626 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 08:31:13.392627 2316740 node_ready.go:57] node "addons-377223" has "Ready":"False" status (will retry)
	I1101 08:31:13.484854 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:31:13.533260 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:13.750066 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:13.793601 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:13.794099 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:14.033898 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:14.250202 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:31:14.280255 2316740 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:31:14.280287 2316740 retry.go:31] will retry after 14.15849595s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:31:14.291879 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:14.292310 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:14.532995 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:14.750891 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:14.792289 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:14.792324 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:15.032898 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:15.265130 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:15.306293 2316740 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 08:31:15.306317 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:15.315194 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:15.397234 2316740 node_ready.go:49] node "addons-377223" is "Ready"
	I1101 08:31:15.397262 2316740 node_ready.go:38] duration metric: took 39.50787249s for node "addons-377223" to be "Ready" ...
	I1101 08:31:15.397275 2316740 api_server.go:52] waiting for apiserver process to appear ...
	I1101 08:31:15.397334 2316740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 08:31:15.411805 2316740 api_server.go:72] duration metric: took 41.678035065s to wait for apiserver process to appear ...
	I1101 08:31:15.411830 2316740 api_server.go:88] waiting for apiserver healthz status ...
	I1101 08:31:15.411884 2316740 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1101 08:31:15.423082 2316740 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1101 08:31:15.426709 2316740 api_server.go:141] control plane version: v1.34.1
	I1101 08:31:15.426734 2316740 api_server.go:131] duration metric: took 14.896769ms to wait for apiserver health ...
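
The lines above show the waiter polling https://192.168.49.2:8443/healthz until it answers 200 and then reading the control-plane version. A stand-alone sketch of that style of probe follows; the endpoint is copied from the log, while the insecure TLS client and fixed sleep are assumptions made purely for illustration (minikube's real logic lives in api_server.go):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The apiserver presents a cluster-internal CA, so this throwaway probe
	// skips verification; production code should load the cluster CA instead.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://192.168.49.2:8443/healthz"
	for {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", resp.Status)
				return
			}
		}
		time.Sleep(2 * time.Second) // keep polling until /healthz returns 200
	}
}
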
	I1101 08:31:15.426743 2316740 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 08:31:15.465863 2316740 system_pods.go:59] 19 kube-system pods found
	I1101 08:31:15.465900 2316740 system_pods.go:61] "coredns-66bc5c9577-jfpff" [348e6114-7b6c-48da-8290-9951dab8c754] Pending
	I1101 08:31:15.465908 2316740 system_pods.go:61] "csi-hostpath-attacher-0" [1dd89e79-ddca-42fd-b7a1-af8280e00ad1] Pending
	I1101 08:31:15.465946 2316740 system_pods.go:61] "csi-hostpath-resizer-0" [5a926beb-ddcb-44da-8dc4-1da2b2d482b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 08:31:15.465958 2316740 system_pods.go:61] "csi-hostpathplugin-9rxph" [c14a0060-c922-484b-aadf-c2df39706fad] Pending
	I1101 08:31:15.465965 2316740 system_pods.go:61] "etcd-addons-377223" [8e162b68-3c71-4956-a399-e73e3cd2cc56] Running
	I1101 08:31:15.465969 2316740 system_pods.go:61] "kindnet-g288l" [47d7d7be-916a-4b37-80b7-6c05dd045040] Running
	I1101 08:31:15.465980 2316740 system_pods.go:61] "kube-apiserver-addons-377223" [a914d233-7cef-4286-af79-87ad97a5f593] Running
	I1101 08:31:15.465985 2316740 system_pods.go:61] "kube-controller-manager-addons-377223" [eacdf52d-dccf-49ac-82b0-fc999bd249d4] Running
	I1101 08:31:15.465990 2316740 system_pods.go:61] "kube-ingress-dns-minikube" [f83561ff-b559-4279-8112-708aa3b82897] Pending
	I1101 08:31:15.466017 2316740 system_pods.go:61] "kube-proxy-8p9ks" [d28cd1b8-2fa2-4b2c-b3be-6909dbfde171] Running
	I1101 08:31:15.466035 2316740 system_pods.go:61] "kube-scheduler-addons-377223" [a4f1d0d2-c157-4ecf-8503-cf2d3ffc7018] Running
	I1101 08:31:15.466046 2316740 system_pods.go:61] "metrics-server-85b7d694d7-w9zzf" [648c1d34-d194-4696-9225-2f20f84b51df] Pending
	I1101 08:31:15.466051 2316740 system_pods.go:61] "nvidia-device-plugin-daemonset-nh42v" [bf77424f-f0e0-41b0-9413-f5db070cde1b] Pending
	I1101 08:31:15.466059 2316740 system_pods.go:61] "registry-6b586f9694-hgg7l" [09f8c054-e829-4a8f-99ae-15f1199f9ce2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:31:15.466068 2316740 system_pods.go:61] "registry-creds-764b6fb674-jr4nd" [2162454c-4ead-4a3a-aeb4-e07bbd81c04c] Pending
	I1101 08:31:15.466074 2316740 system_pods.go:61] "registry-proxy-ntzvs" [31f9ce22-49ba-49b5-8f43-927666ffacc6] Pending
	I1101 08:31:15.466078 2316740 system_pods.go:61] "snapshot-controller-7d9fbc56b8-jjjfk" [0348bdd6-0344-4a9f-9873-9cc11add902e] Pending
	I1101 08:31:15.466088 2316740 system_pods.go:61] "snapshot-controller-7d9fbc56b8-xjs28" [913d4c88-bbf8-4e4e-9beb-87dcbc777d20] Pending
	I1101 08:31:15.466094 2316740 system_pods.go:61] "storage-provisioner" [e7ecfd32-4b4e-4f67-a9be-7310f1b83c46] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 08:31:15.466113 2316740 system_pods.go:74] duration metric: took 39.343916ms to wait for pod list to return data ...
	I1101 08:31:15.466123 2316740 default_sa.go:34] waiting for default service account to be created ...
	I1101 08:31:15.525542 2316740 default_sa.go:45] found service account: "default"
	I1101 08:31:15.525568 2316740 default_sa.go:55] duration metric: took 59.429847ms for default service account to be created ...
	I1101 08:31:15.525579 2316740 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 08:31:15.597833 2316740 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 08:31:15.597859 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:15.605078 2316740 system_pods.go:86] 19 kube-system pods found
	I1101 08:31:15.605113 2316740 system_pods.go:89] "coredns-66bc5c9577-jfpff" [348e6114-7b6c-48da-8290-9951dab8c754] Pending
	I1101 08:31:15.605121 2316740 system_pods.go:89] "csi-hostpath-attacher-0" [1dd89e79-ddca-42fd-b7a1-af8280e00ad1] Pending
	I1101 08:31:15.605157 2316740 system_pods.go:89] "csi-hostpath-resizer-0" [5a926beb-ddcb-44da-8dc4-1da2b2d482b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 08:31:15.605171 2316740 system_pods.go:89] "csi-hostpathplugin-9rxph" [c14a0060-c922-484b-aadf-c2df39706fad] Pending
	I1101 08:31:15.605177 2316740 system_pods.go:89] "etcd-addons-377223" [8e162b68-3c71-4956-a399-e73e3cd2cc56] Running
	I1101 08:31:15.605182 2316740 system_pods.go:89] "kindnet-g288l" [47d7d7be-916a-4b37-80b7-6c05dd045040] Running
	I1101 08:31:15.605187 2316740 system_pods.go:89] "kube-apiserver-addons-377223" [a914d233-7cef-4286-af79-87ad97a5f593] Running
	I1101 08:31:15.605198 2316740 system_pods.go:89] "kube-controller-manager-addons-377223" [eacdf52d-dccf-49ac-82b0-fc999bd249d4] Running
	I1101 08:31:15.605202 2316740 system_pods.go:89] "kube-ingress-dns-minikube" [f83561ff-b559-4279-8112-708aa3b82897] Pending
	I1101 08:31:15.605206 2316740 system_pods.go:89] "kube-proxy-8p9ks" [d28cd1b8-2fa2-4b2c-b3be-6909dbfde171] Running
	I1101 08:31:15.605226 2316740 system_pods.go:89] "kube-scheduler-addons-377223" [a4f1d0d2-c157-4ecf-8503-cf2d3ffc7018] Running
	I1101 08:31:15.605244 2316740 system_pods.go:89] "metrics-server-85b7d694d7-w9zzf" [648c1d34-d194-4696-9225-2f20f84b51df] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:31:15.605249 2316740 system_pods.go:89] "nvidia-device-plugin-daemonset-nh42v" [bf77424f-f0e0-41b0-9413-f5db070cde1b] Pending
	I1101 08:31:15.605260 2316740 system_pods.go:89] "registry-6b586f9694-hgg7l" [09f8c054-e829-4a8f-99ae-15f1199f9ce2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:31:15.605264 2316740 system_pods.go:89] "registry-creds-764b6fb674-jr4nd" [2162454c-4ead-4a3a-aeb4-e07bbd81c04c] Pending
	I1101 08:31:15.605275 2316740 system_pods.go:89] "registry-proxy-ntzvs" [31f9ce22-49ba-49b5-8f43-927666ffacc6] Pending
	I1101 08:31:15.605279 2316740 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jjjfk" [0348bdd6-0344-4a9f-9873-9cc11add902e] Pending
	I1101 08:31:15.605284 2316740 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xjs28" [913d4c88-bbf8-4e4e-9beb-87dcbc777d20] Pending
	I1101 08:31:15.605307 2316740 system_pods.go:89] "storage-provisioner" [e7ecfd32-4b4e-4f67-a9be-7310f1b83c46] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 08:31:15.605330 2316740 retry.go:31] will retry after 227.808824ms: missing components: kube-dns
	I1101 08:31:15.753002 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:15.796467 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:15.796708 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:15.843706 2316740 system_pods.go:86] 19 kube-system pods found
	I1101 08:31:15.843745 2316740 system_pods.go:89] "coredns-66bc5c9577-jfpff" [348e6114-7b6c-48da-8290-9951dab8c754] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:31:15.843754 2316740 system_pods.go:89] "csi-hostpath-attacher-0" [1dd89e79-ddca-42fd-b7a1-af8280e00ad1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 08:31:15.843761 2316740 system_pods.go:89] "csi-hostpath-resizer-0" [5a926beb-ddcb-44da-8dc4-1da2b2d482b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 08:31:15.843795 2316740 system_pods.go:89] "csi-hostpathplugin-9rxph" [c14a0060-c922-484b-aadf-c2df39706fad] Pending
	I1101 08:31:15.843800 2316740 system_pods.go:89] "etcd-addons-377223" [8e162b68-3c71-4956-a399-e73e3cd2cc56] Running
	I1101 08:31:15.843805 2316740 system_pods.go:89] "kindnet-g288l" [47d7d7be-916a-4b37-80b7-6c05dd045040] Running
	I1101 08:31:15.843810 2316740 system_pods.go:89] "kube-apiserver-addons-377223" [a914d233-7cef-4286-af79-87ad97a5f593] Running
	I1101 08:31:15.843814 2316740 system_pods.go:89] "kube-controller-manager-addons-377223" [eacdf52d-dccf-49ac-82b0-fc999bd249d4] Running
	I1101 08:31:15.843824 2316740 system_pods.go:89] "kube-ingress-dns-minikube" [f83561ff-b559-4279-8112-708aa3b82897] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:31:15.843828 2316740 system_pods.go:89] "kube-proxy-8p9ks" [d28cd1b8-2fa2-4b2c-b3be-6909dbfde171] Running
	I1101 08:31:15.843834 2316740 system_pods.go:89] "kube-scheduler-addons-377223" [a4f1d0d2-c157-4ecf-8503-cf2d3ffc7018] Running
	I1101 08:31:15.843841 2316740 system_pods.go:89] "metrics-server-85b7d694d7-w9zzf" [648c1d34-d194-4696-9225-2f20f84b51df] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:31:15.843875 2316740 system_pods.go:89] "nvidia-device-plugin-daemonset-nh42v" [bf77424f-f0e0-41b0-9413-f5db070cde1b] Pending
	I1101 08:31:15.843882 2316740 system_pods.go:89] "registry-6b586f9694-hgg7l" [09f8c054-e829-4a8f-99ae-15f1199f9ce2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:31:15.843900 2316740 system_pods.go:89] "registry-creds-764b6fb674-jr4nd" [2162454c-4ead-4a3a-aeb4-e07bbd81c04c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:31:15.843917 2316740 system_pods.go:89] "registry-proxy-ntzvs" [31f9ce22-49ba-49b5-8f43-927666ffacc6] Pending
	I1101 08:31:15.843925 2316740 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jjjfk" [0348bdd6-0344-4a9f-9873-9cc11add902e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:31:15.843932 2316740 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xjs28" [913d4c88-bbf8-4e4e-9beb-87dcbc777d20] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:31:15.843942 2316740 system_pods.go:89] "storage-provisioner" [e7ecfd32-4b4e-4f67-a9be-7310f1b83c46] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 08:31:15.843958 2316740 retry.go:31] will retry after 263.73777ms: missing components: kube-dns
	I1101 08:31:16.034162 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:16.140994 2316740 system_pods.go:86] 19 kube-system pods found
	I1101 08:31:16.141034 2316740 system_pods.go:89] "coredns-66bc5c9577-jfpff" [348e6114-7b6c-48da-8290-9951dab8c754] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:31:16.141043 2316740 system_pods.go:89] "csi-hostpath-attacher-0" [1dd89e79-ddca-42fd-b7a1-af8280e00ad1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 08:31:16.141085 2316740 system_pods.go:89] "csi-hostpath-resizer-0" [5a926beb-ddcb-44da-8dc4-1da2b2d482b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 08:31:16.141101 2316740 system_pods.go:89] "csi-hostpathplugin-9rxph" [c14a0060-c922-484b-aadf-c2df39706fad] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 08:31:16.141106 2316740 system_pods.go:89] "etcd-addons-377223" [8e162b68-3c71-4956-a399-e73e3cd2cc56] Running
	I1101 08:31:16.141112 2316740 system_pods.go:89] "kindnet-g288l" [47d7d7be-916a-4b37-80b7-6c05dd045040] Running
	I1101 08:31:16.141116 2316740 system_pods.go:89] "kube-apiserver-addons-377223" [a914d233-7cef-4286-af79-87ad97a5f593] Running
	I1101 08:31:16.141122 2316740 system_pods.go:89] "kube-controller-manager-addons-377223" [eacdf52d-dccf-49ac-82b0-fc999bd249d4] Running
	I1101 08:31:16.141136 2316740 system_pods.go:89] "kube-ingress-dns-minikube" [f83561ff-b559-4279-8112-708aa3b82897] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:31:16.141160 2316740 system_pods.go:89] "kube-proxy-8p9ks" [d28cd1b8-2fa2-4b2c-b3be-6909dbfde171] Running
	I1101 08:31:16.141166 2316740 system_pods.go:89] "kube-scheduler-addons-377223" [a4f1d0d2-c157-4ecf-8503-cf2d3ffc7018] Running
	I1101 08:31:16.141172 2316740 system_pods.go:89] "metrics-server-85b7d694d7-w9zzf" [648c1d34-d194-4696-9225-2f20f84b51df] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:31:16.141180 2316740 system_pods.go:89] "nvidia-device-plugin-daemonset-nh42v" [bf77424f-f0e0-41b0-9413-f5db070cde1b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 08:31:16.141196 2316740 system_pods.go:89] "registry-6b586f9694-hgg7l" [09f8c054-e829-4a8f-99ae-15f1199f9ce2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:31:16.141211 2316740 system_pods.go:89] "registry-creds-764b6fb674-jr4nd" [2162454c-4ead-4a3a-aeb4-e07bbd81c04c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:31:16.141217 2316740 system_pods.go:89] "registry-proxy-ntzvs" [31f9ce22-49ba-49b5-8f43-927666ffacc6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 08:31:16.141230 2316740 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jjjfk" [0348bdd6-0344-4a9f-9873-9cc11add902e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:31:16.141239 2316740 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xjs28" [913d4c88-bbf8-4e4e-9beb-87dcbc777d20] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:31:16.141247 2316740 system_pods.go:89] "storage-provisioner" [e7ecfd32-4b4e-4f67-a9be-7310f1b83c46] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 08:31:16.141273 2316740 retry.go:31] will retry after 339.770132ms: missing components: kube-dns
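
The "will retry after ..." lines come from a poll-with-backoff loop: evaluate a readiness condition (here, whether any kube-system component such as kube-dns is still missing), and if it fails, sleep a short, slightly randomized interval and try again until a deadline. A stdlib-only sketch of that pattern, assuming a placeholder condition function rather than minikube's real pod check:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// pollUntil retries check() with a jittered, growing delay until it succeeds
// or the overall timeout expires, mirroring the retry lines in the log.
func pollUntil(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out, last error: %w", err)
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		if delay < 10*time.Second {
			delay *= 2
		}
	}
}

func main() {
	// Placeholder condition standing in for "are all kube-system pods running?".
	attempts := 0
	err := pollUntil(30*time.Second, func() error {
		attempts++
		if attempts < 3 {
			return errors.New("missing components: kube-dns")
		}
		return nil
	})
	fmt.Println("result:", err)
}
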
	I1101 08:31:16.250768 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:16.316421 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:16.316630 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:16.525141 2316740 system_pods.go:86] 19 kube-system pods found
	I1101 08:31:16.525181 2316740 system_pods.go:89] "coredns-66bc5c9577-jfpff" [348e6114-7b6c-48da-8290-9951dab8c754] Running
	I1101 08:31:16.525193 2316740 system_pods.go:89] "csi-hostpath-attacher-0" [1dd89e79-ddca-42fd-b7a1-af8280e00ad1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 08:31:16.525243 2316740 system_pods.go:89] "csi-hostpath-resizer-0" [5a926beb-ddcb-44da-8dc4-1da2b2d482b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 08:31:16.525258 2316740 system_pods.go:89] "csi-hostpathplugin-9rxph" [c14a0060-c922-484b-aadf-c2df39706fad] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 08:31:16.525264 2316740 system_pods.go:89] "etcd-addons-377223" [8e162b68-3c71-4956-a399-e73e3cd2cc56] Running
	I1101 08:31:16.525276 2316740 system_pods.go:89] "kindnet-g288l" [47d7d7be-916a-4b37-80b7-6c05dd045040] Running
	I1101 08:31:16.525280 2316740 system_pods.go:89] "kube-apiserver-addons-377223" [a914d233-7cef-4286-af79-87ad97a5f593] Running
	I1101 08:31:16.525285 2316740 system_pods.go:89] "kube-controller-manager-addons-377223" [eacdf52d-dccf-49ac-82b0-fc999bd249d4] Running
	I1101 08:31:16.525314 2316740 system_pods.go:89] "kube-ingress-dns-minikube" [f83561ff-b559-4279-8112-708aa3b82897] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:31:16.525318 2316740 system_pods.go:89] "kube-proxy-8p9ks" [d28cd1b8-2fa2-4b2c-b3be-6909dbfde171] Running
	I1101 08:31:16.525340 2316740 system_pods.go:89] "kube-scheduler-addons-377223" [a4f1d0d2-c157-4ecf-8503-cf2d3ffc7018] Running
	I1101 08:31:16.525348 2316740 system_pods.go:89] "metrics-server-85b7d694d7-w9zzf" [648c1d34-d194-4696-9225-2f20f84b51df] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:31:16.525361 2316740 system_pods.go:89] "nvidia-device-plugin-daemonset-nh42v" [bf77424f-f0e0-41b0-9413-f5db070cde1b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 08:31:16.525368 2316740 system_pods.go:89] "registry-6b586f9694-hgg7l" [09f8c054-e829-4a8f-99ae-15f1199f9ce2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:31:16.525378 2316740 system_pods.go:89] "registry-creds-764b6fb674-jr4nd" [2162454c-4ead-4a3a-aeb4-e07bbd81c04c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:31:16.525385 2316740 system_pods.go:89] "registry-proxy-ntzvs" [31f9ce22-49ba-49b5-8f43-927666ffacc6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 08:31:16.525406 2316740 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jjjfk" [0348bdd6-0344-4a9f-9873-9cc11add902e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:31:16.525421 2316740 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xjs28" [913d4c88-bbf8-4e4e-9beb-87dcbc777d20] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:31:16.525426 2316740 system_pods.go:89] "storage-provisioner" [e7ecfd32-4b4e-4f67-a9be-7310f1b83c46] Running
	I1101 08:31:16.525439 2316740 system_pods.go:126] duration metric: took 999.854307ms to wait for k8s-apps to be running ...
	I1101 08:31:16.525447 2316740 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 08:31:16.525517 2316740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 08:31:16.545979 2316740 system_svc.go:56] duration metric: took 20.523276ms WaitForService to wait for kubelet
	I1101 08:31:16.546051 2316740 kubeadm.go:587] duration metric: took 42.812285127s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
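
The kubelet check above simply asks systemd whether the unit is active and treats a zero exit code as "running". A minimal sketch of the same probe using os/exec, run directly on the node rather than over SSH (that difference is an assumption for illustration only):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet kubelet` exits 0 when the unit is active,
	// which is what the ssh_runner invocation in the log relies on.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
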
	I1101 08:31:16.546086 2316740 node_conditions.go:102] verifying NodePressure condition ...
	I1101 08:31:16.549589 2316740 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 08:31:16.549668 2316740 node_conditions.go:123] node cpu capacity is 2
	I1101 08:31:16.549695 2316740 node_conditions.go:105] duration metric: took 3.58939ms to run NodePressure ...
	I1101 08:31:16.549720 2316740 start.go:242] waiting for startup goroutines ...
	I1101 08:31:16.608198 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:16.750519 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:16.793566 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:16.794208 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:17.036434 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:17.251118 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:17.293917 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:17.294332 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:17.535410 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:17.750727 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:17.791750 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:17.791951 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:18.034026 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:18.251552 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:18.292949 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:18.294102 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:18.541038 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:18.750444 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:18.793527 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:18.793946 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:19.034127 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:19.250603 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:19.293542 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:19.294068 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:19.534417 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:19.750706 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:19.793255 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:19.793674 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:20.034604 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:20.251083 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:20.293556 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:20.293715 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:20.532720 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:20.750644 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:20.792798 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:20.793122 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:21.033835 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:21.250119 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:21.293688 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:21.294058 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:21.534033 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:21.750890 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:21.792512 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:21.792617 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:22.034118 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:22.250621 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:22.293339 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:22.293762 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:22.533664 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:22.750845 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:22.793118 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:22.793478 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:23.035342 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:23.250471 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:23.292719 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:23.293439 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:23.534283 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:23.750126 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:23.792263 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:23.792751 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:24.033954 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:24.251228 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:24.293185 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:24.293605 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:24.533430 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:24.750544 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:24.792879 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:24.793278 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:25.033819 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:25.252333 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:25.294583 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:25.295058 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:25.536321 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:25.752243 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:25.793590 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:25.793846 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:26.034046 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:26.251568 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:26.358656 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:26.359082 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:26.534713 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:26.751221 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:26.792727 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:26.797385 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:27.034273 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:27.250930 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:27.293859 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:27.294132 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:27.534362 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:27.750862 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:27.793499 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:27.794072 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:28.034096 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:28.251436 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:28.292195 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:28.292215 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:28.439581 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:31:28.536990 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:28.750765 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:28.792393 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:28.792799 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:29.033934 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:29.250267 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:31:29.297497 2316740 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:31:29.297573 2316740 retry.go:31] will retry after 19.615074116s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
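The validation error above is kubectl refusing a manifest whose YAML document does not declare apiVersion and kind. As a rough illustration (not minikube's code), the Go sketch below performs the same pre-check on a multi-document YAML file; the file path is taken from the log, everything else is an assumption.

// validate_manifest.go - a minimal sketch of the check kubectl's validation performs
// above: every document in a manifest must set apiVersion and kind.
package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func validate(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for i := 0; ; i++ {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
			return nil // all documents checked
		} else if err != nil {
			return err
		}
		if doc == nil {
			continue // empty document between '---' separators
		}
		if doc["apiVersion"] == nil || doc["kind"] == nil {
			return fmt.Errorf("document %d: apiVersion or kind not set", i)
		}
	}
}

func main() {
	if err := validate("/etc/kubernetes/addons/ig-crd.yaml"); err != nil {
		fmt.Println("validation failed:", err)
		os.Exit(1)
	}
	fmt.Println("manifest ok")
}

Running such a check before kubectl apply would surface the same "apiVersion not set, kind not set" problem without needing --validate=false.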
	I1101 08:31:29.298115 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:29.298319 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:29.533315 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:29.750232 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:29.792440 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:29.792875 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:30.034607 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:30.251563 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:30.293617 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:30.294132 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:30.534104 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:30.750499 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:30.792743 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:30.792897 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:31.033200 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:31.250569 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:31.292714 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:31.292864 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:31.533121 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:31.750109 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:31.792895 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:31.793145 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:32.033253 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:32.250395 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:32.293486 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:32.293924 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:32.533433 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:32.750246 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:32.791146 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:32.791482 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:33.034079 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:33.250434 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:33.291685 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:33.292348 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:33.534082 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:33.750934 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:33.792617 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:33.793609 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:34.033976 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:34.250734 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:34.292567 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:34.292784 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:34.533297 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:34.750127 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:34.791807 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:34.792510 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:35.034443 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:35.250456 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:35.292413 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:35.292717 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:35.534690 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:35.751158 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:35.793203 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:35.793532 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:36.034489 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:36.250450 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:36.291998 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:36.292104 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:36.533502 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:36.750393 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:36.791669 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:36.792025 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:37.033596 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:37.250292 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:37.291765 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:37.291995 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:37.533134 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:37.749986 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:37.792800 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:37.793015 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:38.034384 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:38.250445 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:38.293013 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:38.293315 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:38.537610 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:38.750380 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:38.792001 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:38.792513 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:39.033999 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:39.250450 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:39.291789 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:39.292236 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:39.533684 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:39.750390 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:39.793330 2316740 kapi.go:107] duration metric: took 1m0.004984544s to wait for kubernetes.io/minikube-addons=registry ...
	I1101 08:31:39.793807 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:40.033823 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:40.250865 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:40.292131 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:40.533889 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:40.750578 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:40.791336 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:41.033856 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:41.249905 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:41.292728 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:41.536927 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:41.750075 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:41.792360 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:42.034502 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:42.251398 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:42.291966 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:42.534783 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:42.750939 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:42.791874 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:43.044426 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:43.254267 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:43.292838 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:43.535085 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:43.751271 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:43.852595 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:44.033031 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:44.250876 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:44.292497 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:44.535011 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:44.749738 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:44.791710 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:45.034208 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:45.252580 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:45.303373 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:45.533980 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:45.750383 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:45.791263 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:46.033975 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:46.250357 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:46.295420 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:46.537383 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:46.750030 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:46.795072 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:47.033954 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:47.251620 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:47.306644 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:47.533405 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:47.749884 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:47.824108 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:48.034366 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:48.250186 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:48.292156 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:48.537617 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:48.750208 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:48.792145 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:48.913536 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:31:49.033285 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:49.250422 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:49.292270 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:49.533723 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:49.751408 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:49.791836 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:50.034455 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:50.059568 2316740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.145989892s)
	W1101 08:31:50.059653 2316740 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:31:50.059685 2316740 retry.go:31] will retry after 42.393662681s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
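The retry.go lines above show the failed apply being re-queued with a growing delay (roughly 19s, then 42s). Below is a minimal Go sketch of that pattern, assuming a doubling delay with random jitter; none of the names or durations come from minikube itself.

// retry_sketch.go - a hedged sketch, not minikube's retry.go: re-run a failing step
// with an increasing, jittered delay until it succeeds or attempts are exhausted.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		if i == attempts-1 {
			break
		}
		jitter := time.Duration(0)
		if half := int64(delay) / 2; half > 0 {
			jitter = time.Duration(rand.Int63n(half)) // spread retries out a little
		}
		fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
		time.Sleep(delay + jitter)
		delay *= 2 // roughly matches the 19s -> 42s gaps in the log above
	}
	return fmt.Errorf("after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	_ = retryWithBackoff(3, 2*time.Second, func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("simulated apply failure")
		}
		return nil
	})
}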
	I1101 08:31:50.250837 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:50.291945 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:50.534057 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:50.750482 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:50.791933 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:51.034194 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:51.250391 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:51.293306 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:51.533696 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:51.750538 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:51.791938 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:52.033798 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:52.250286 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:52.292093 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:52.549499 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:52.751044 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:52.793917 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:53.033494 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:53.250983 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:53.292317 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:53.536148 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:53.750514 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:53.791883 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:54.033491 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:54.250461 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:54.291544 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:54.541159 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:54.750428 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:54.792435 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:55.034338 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:55.250869 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:55.291913 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:55.533683 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:55.751031 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:55.792689 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:56.033510 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:56.250998 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:56.292345 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:56.534013 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:56.752224 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:56.792573 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:57.033262 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:57.250407 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:57.291506 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:57.534524 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:57.750728 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:57.851283 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:58.034951 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:58.251317 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:58.292608 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:58.540582 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:58.751171 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:58.792254 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:59.034292 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:59.251628 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:59.292875 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:59.533762 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:59.751332 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:59.791367 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:00.045987 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:32:00.254350 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:00.298934 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:00.553156 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:32:00.753578 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:00.794530 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:01.033422 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:32:01.251752 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:01.291926 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:01.532971 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:32:01.750949 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:01.792228 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:02.036758 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:32:02.251511 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:02.291593 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:02.534021 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:32:02.751457 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:02.792421 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:03.034158 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:32:03.251762 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:03.295009 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:03.534110 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:32:03.752507 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:03.797892 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:04.033827 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:32:04.250421 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:04.292150 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:04.534041 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:32:04.750503 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:04.791658 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:05.034398 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:32:05.249956 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:05.291634 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:05.534390 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:32:05.750202 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:05.795507 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:06.033633 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:32:06.250488 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:06.291937 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:06.533821 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:32:06.751354 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:06.792312 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:07.033129 2316740 kapi.go:107] duration metric: took 1m26.503330863s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1101 08:32:07.251052 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:07.292035 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:07.750233 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:07.791816 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:08.251490 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:08.291334 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:08.750709 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:08.791758 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:09.250670 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:09.291529 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:09.751252 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:09.792259 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:10.250604 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:10.291642 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:10.751069 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:10.791843 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:11.250838 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:11.291997 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:11.750140 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:11.791916 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:12.250041 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:12.291699 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:12.749931 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:12.792095 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:13.251200 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:13.292181 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:13.749760 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:13.791706 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:14.250099 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:14.291926 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:14.750150 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:14.792067 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:15.249844 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:15.291819 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:15.750708 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:15.791595 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:16.249515 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:16.291369 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:16.750742 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:16.791953 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:17.250234 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:17.292273 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:17.750798 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:17.791464 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:18.250824 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:18.291734 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:18.750192 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:18.792103 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:19.250626 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:19.291303 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:19.750820 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:19.852615 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:20.250463 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:20.291377 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:20.750739 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:20.791782 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:21.250926 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:21.292337 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:21.750706 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:21.792025 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:22.251285 2316740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:32:22.351617 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:22.750946 2316740 kapi.go:107] duration metric: took 1m39.003820912s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1101 08:32:22.752528 2316740 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-377223 cluster.
	I1101 08:32:22.753464 2316740 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1101 08:32:22.754453 2316740 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
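The gcp-auth messages above describe opting a pod out of credential mounting by adding the gcp-auth-skip-secret label to its configuration. A hedged client-go sketch of such a pod follows; only the label key comes from the log, while the label value "true", the pod name, and the image are assumptions.

// skip_gcp_auth.go - a minimal sketch of a pod carrying the gcp-auth-skip-secret
// label so the gcp-auth addon leaves it alone. Not a definitive manifest.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-creds",
			Labels: map[string]string{"gcp-auth-skip-secret": "true"}, // opt this pod out
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	created, err := client.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod", created.Name)
}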
	I1101 08:32:22.791998 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:23.292629 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:23.792100 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:24.292107 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:24.792104 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:25.291647 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:25.791784 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:26.292368 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:26.791490 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:27.292074 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:27.792167 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:28.292159 2316740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:32:28.801365 2316740 kapi.go:107] duration metric: took 1m49.012941762s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1101 08:32:32.453609 2316740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1101 08:32:33.299536 2316740 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 08:32:33.299627 2316740 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1101 08:32:33.302827 2316740 out.go:179] * Enabled addons: registry-creds, amd-gpu-device-plugin, storage-provisioner, default-storageclass, cloud-spanner, nvidia-device-plugin, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I1101 08:32:33.305514 2316740 addons.go:515] duration metric: took 1m59.571357205s for enable addons: enabled=[registry-creds amd-gpu-device-plugin storage-provisioner default-storageclass cloud-spanner nvidia-device-plugin ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I1101 08:32:33.305561 2316740 start.go:247] waiting for cluster config update ...
	I1101 08:32:33.305583 2316740 start.go:256] writing updated cluster config ...
	I1101 08:32:33.305885 2316740 ssh_runner.go:195] Run: rm -f paused
	I1101 08:32:33.310031 2316740 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 08:32:33.314220 2316740 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jfpff" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:32:33.319743 2316740 pod_ready.go:94] pod "coredns-66bc5c9577-jfpff" is "Ready"
	I1101 08:32:33.319768 2316740 pod_ready.go:86] duration metric: took 5.526508ms for pod "coredns-66bc5c9577-jfpff" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:32:33.322388 2316740 pod_ready.go:83] waiting for pod "etcd-addons-377223" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:32:33.327109 2316740 pod_ready.go:94] pod "etcd-addons-377223" is "Ready"
	I1101 08:32:33.327185 2316740 pod_ready.go:86] duration metric: took 4.769582ms for pod "etcd-addons-377223" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:32:33.329769 2316740 pod_ready.go:83] waiting for pod "kube-apiserver-addons-377223" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:32:33.334540 2316740 pod_ready.go:94] pod "kube-apiserver-addons-377223" is "Ready"
	I1101 08:32:33.334621 2316740 pod_ready.go:86] duration metric: took 4.828476ms for pod "kube-apiserver-addons-377223" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:32:33.337098 2316740 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-377223" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:32:33.714616 2316740 pod_ready.go:94] pod "kube-controller-manager-addons-377223" is "Ready"
	I1101 08:32:33.714685 2316740 pod_ready.go:86] duration metric: took 377.559313ms for pod "kube-controller-manager-addons-377223" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:32:33.913627 2316740 pod_ready.go:83] waiting for pod "kube-proxy-8p9ks" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:32:34.314347 2316740 pod_ready.go:94] pod "kube-proxy-8p9ks" is "Ready"
	I1101 08:32:34.314372 2316740 pod_ready.go:86] duration metric: took 400.720811ms for pod "kube-proxy-8p9ks" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:32:34.514318 2316740 pod_ready.go:83] waiting for pod "kube-scheduler-addons-377223" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:32:34.914076 2316740 pod_ready.go:94] pod "kube-scheduler-addons-377223" is "Ready"
	I1101 08:32:34.914102 2316740 pod_ready.go:86] duration metric: took 399.705202ms for pod "kube-scheduler-addons-377223" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:32:34.914115 2316740 pod_ready.go:40] duration metric: took 1.604056909s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 08:32:34.964978 2316740 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 08:32:34.971421 2316740 out.go:179] * Done! kubectl is now configured to use "addons-377223" cluster and "default" namespace by default
	
	
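The repeated stderr line above ("apiVersion not set, kind not set") means kubectl's client-side validation rejected /etc/kubernetes/addons/ig-crd.yaml because the document it parsed was missing the two mandatory top-level fields of every Kubernetes object. The actual contents of ig-crd.yaml are not captured in this report; purely as an illustration (hypothetical group and names, not the real inspektor-gadget CRD), a minimal manifest that would pass this check looks like:

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: widgets.example.com          # hypothetical; not the inspektor-gadget CRD
    spec:
      group: example.com
      scope: Namespaced
      names:
        plural: widgets
        singular: widget
        kind: Widget
      versions:
        - name: v1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object

Passing --validate=false, as the message suggests, only skips this client-side check; the API server would still reject an object without apiVersion or kind, so the error typically points to the manifest file being empty, truncated, or otherwise missing those fields when it was written to the node.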
	==> CRI-O <==
	Nov 01 08:33:03 addons-377223 crio[830]: time="2025-11-01T08:33:03.560723544Z" level=info msg="Started container" PID=5461 containerID=9ab73c5f83cafd3f8d06ff79f901b1b655606aca74ef67b33ff099f1c9215356 description=default/test-local-path/busybox id=43e397e4-f362-4d0f-9006-241a83a29990 name=/runtime.v1.RuntimeService/StartContainer sandboxID=be967e33f2d3e188bf253464c4f9723d8f9d37fa329f609f05eca322634d45b9
	Nov 01 08:33:04 addons-377223 crio[830]: time="2025-11-01T08:33:04.849572446Z" level=info msg="Stopping pod sandbox: be967e33f2d3e188bf253464c4f9723d8f9d37fa329f609f05eca322634d45b9" id=8f0dcbb5-1935-4cda-92e2-9561cec1b253 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 08:33:04 addons-377223 crio[830]: time="2025-11-01T08:33:04.849831318Z" level=info msg="Got pod network &{Name:test-local-path Namespace:default ID:be967e33f2d3e188bf253464c4f9723d8f9d37fa329f609f05eca322634d45b9 UID:887e9be5-d26a-4e20-b993-18af783818c7 NetNS:/var/run/netns/48dc90d7-2e58-450e-a3bb-9eb5ba3b9c21 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400282ad70}] Aliases:map[]}"
	Nov 01 08:33:04 addons-377223 crio[830]: time="2025-11-01T08:33:04.849969989Z" level=info msg="Deleting pod default_test-local-path from CNI network \"kindnet\" (type=ptp)"
	Nov 01 08:33:04 addons-377223 crio[830]: time="2025-11-01T08:33:04.878397163Z" level=info msg="Stopped pod sandbox: be967e33f2d3e188bf253464c4f9723d8f9d37fa329f609f05eca322634d45b9" id=8f0dcbb5-1935-4cda-92e2-9561cec1b253 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 08:33:06 addons-377223 crio[830]: time="2025-11-01T08:33:06.572384352Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-delete-pvc-b9d8d8a4-42f3-4d56-9455-13fa291567c9/POD" id=e0045626-9156-4016-8e52-1ccfec5e861a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 08:33:06 addons-377223 crio[830]: time="2025-11-01T08:33:06.572452994Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 08:33:06 addons-377223 crio[830]: time="2025-11-01T08:33:06.595993199Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-b9d8d8a4-42f3-4d56-9455-13fa291567c9 Namespace:local-path-storage ID:e86e9bf1a1460414614eccdf10e35cdf7f20581ef99de6b4ee27a87a7e499fea UID:aac8705b-d25d-43b9-b634-2c83bbaeddec NetNS:/var/run/netns/89cc351f-3a1d-409b-ab99-2eed73467e73 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000da488}] Aliases:map[]}"
	Nov 01 08:33:06 addons-377223 crio[830]: time="2025-11-01T08:33:06.596041845Z" level=info msg="Adding pod local-path-storage_helper-pod-delete-pvc-b9d8d8a4-42f3-4d56-9455-13fa291567c9 to CNI network \"kindnet\" (type=ptp)"
	Nov 01 08:33:06 addons-377223 crio[830]: time="2025-11-01T08:33:06.609758889Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-b9d8d8a4-42f3-4d56-9455-13fa291567c9 Namespace:local-path-storage ID:e86e9bf1a1460414614eccdf10e35cdf7f20581ef99de6b4ee27a87a7e499fea UID:aac8705b-d25d-43b9-b634-2c83bbaeddec NetNS:/var/run/netns/89cc351f-3a1d-409b-ab99-2eed73467e73 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000da488}] Aliases:map[]}"
	Nov 01 08:33:06 addons-377223 crio[830]: time="2025-11-01T08:33:06.609897675Z" level=info msg="Checking pod local-path-storage_helper-pod-delete-pvc-b9d8d8a4-42f3-4d56-9455-13fa291567c9 for CNI network kindnet (type=ptp)"
	Nov 01 08:33:06 addons-377223 crio[830]: time="2025-11-01T08:33:06.622111452Z" level=info msg="Ran pod sandbox e86e9bf1a1460414614eccdf10e35cdf7f20581ef99de6b4ee27a87a7e499fea with infra container: local-path-storage/helper-pod-delete-pvc-b9d8d8a4-42f3-4d56-9455-13fa291567c9/POD" id=e0045626-9156-4016-8e52-1ccfec5e861a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 08:33:06 addons-377223 crio[830]: time="2025-11-01T08:33:06.623286367Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=62ed1ca0-0cf8-4d34-a54a-8704b4dbf482 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 08:33:06 addons-377223 crio[830]: time="2025-11-01T08:33:06.6296055Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=7867a93f-dd7e-4e15-9e8f-d58c1b326e9e name=/runtime.v1.ImageService/ImageStatus
	Nov 01 08:33:06 addons-377223 crio[830]: time="2025-11-01T08:33:06.645780005Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-b9d8d8a4-42f3-4d56-9455-13fa291567c9/helper-pod" id=18670967-e4fb-4f1d-bac7-d6be48f4f3fb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 08:33:06 addons-377223 crio[830]: time="2025-11-01T08:33:06.645895506Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 08:33:06 addons-377223 crio[830]: time="2025-11-01T08:33:06.65299649Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 08:33:06 addons-377223 crio[830]: time="2025-11-01T08:33:06.653679719Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 08:33:06 addons-377223 crio[830]: time="2025-11-01T08:33:06.676506016Z" level=info msg="Created container a7f0b80826c62eac785f283b3614c463420f5a55227f7b252380322935d2dcfe: local-path-storage/helper-pod-delete-pvc-b9d8d8a4-42f3-4d56-9455-13fa291567c9/helper-pod" id=18670967-e4fb-4f1d-bac7-d6be48f4f3fb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 08:33:06 addons-377223 crio[830]: time="2025-11-01T08:33:06.680656327Z" level=info msg="Starting container: a7f0b80826c62eac785f283b3614c463420f5a55227f7b252380322935d2dcfe" id=baca7dbd-fa13-40b6-8f72-b423f4b85142 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 08:33:06 addons-377223 crio[830]: time="2025-11-01T08:33:06.686342863Z" level=info msg="Started container" PID=5546 containerID=a7f0b80826c62eac785f283b3614c463420f5a55227f7b252380322935d2dcfe description=local-path-storage/helper-pod-delete-pvc-b9d8d8a4-42f3-4d56-9455-13fa291567c9/helper-pod id=baca7dbd-fa13-40b6-8f72-b423f4b85142 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e86e9bf1a1460414614eccdf10e35cdf7f20581ef99de6b4ee27a87a7e499fea
	Nov 01 08:33:07 addons-377223 crio[830]: time="2025-11-01T08:33:07.865444753Z" level=info msg="Stopping pod sandbox: e86e9bf1a1460414614eccdf10e35cdf7f20581ef99de6b4ee27a87a7e499fea" id=a9bb29e4-5463-46bd-bb43-9d6c7c3f2382 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 08:33:07 addons-377223 crio[830]: time="2025-11-01T08:33:07.865694452Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-b9d8d8a4-42f3-4d56-9455-13fa291567c9 Namespace:local-path-storage ID:e86e9bf1a1460414614eccdf10e35cdf7f20581ef99de6b4ee27a87a7e499fea UID:aac8705b-d25d-43b9-b634-2c83bbaeddec NetNS:/var/run/netns/89cc351f-3a1d-409b-ab99-2eed73467e73 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000db318}] Aliases:map[]}"
	Nov 01 08:33:07 addons-377223 crio[830]: time="2025-11-01T08:33:07.865834953Z" level=info msg="Deleting pod local-path-storage_helper-pod-delete-pvc-b9d8d8a4-42f3-4d56-9455-13fa291567c9 from CNI network \"kindnet\" (type=ptp)"
	Nov 01 08:33:07 addons-377223 crio[830]: time="2025-11-01T08:33:07.897056826Z" level=info msg="Stopped pod sandbox: e86e9bf1a1460414614eccdf10e35cdf7f20581ef99de6b4ee27a87a7e499fea" id=a9bb29e4-5463-46bd-bb43-9d6c7c3f2382 name=/runtime.v1.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	a7f0b80826c62       fc9db2894f4e4b8c296b8c9dab7e18a6e78de700d21bc0cfaf5c78484226db9c                                                                             1 second ago         Exited              helper-pod                               0                   e86e9bf1a1460       helper-pod-delete-pvc-b9d8d8a4-42f3-4d56-9455-13fa291567c9   local-path-storage
	9ab73c5f83caf       docker.io/library/busybox@sha256:079b4a73854a059a2073c6e1a031b17fcbf23a47c6c59ae760d78045199e403c                                            4 seconds ago        Exited              busybox                                  0                   be967e33f2d3e       test-local-path                                              default
	e507cf132eb98       docker.io/library/busybox@sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11                                            9 seconds ago        Exited              helper-pod                               0                   63e299f5a1594       helper-pod-create-pvc-b9d8d8a4-42f3-4d56-9455-13fa291567c9   local-path-storage
	1075cdd73f4ae       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          30 seconds ago       Running             busybox                                  0                   6d145ff483fdd       busybox                                                      default
	04681fc01736e       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             39 seconds ago       Running             controller                               0                   c4eab738759ff       ingress-nginx-controller-675c5ddd98-rjv49                    ingress-nginx
	13678f17060df       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 46 seconds ago       Running             gcp-auth                                 0                   16e766bc2f8a4       gcp-auth-78565c9fb4-sf5ck                                    gcp-auth
	414b5bc39c329       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          About a minute ago   Running             csi-snapshotter                          0                   1c24bf0a4a833       csi-hostpathplugin-9rxph                                     kube-system
	061ec86ab4df3       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          About a minute ago   Running             csi-provisioner                          0                   1c24bf0a4a833       csi-hostpathplugin-9rxph                                     kube-system
	1e44c8f5f77ec       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            About a minute ago   Running             liveness-probe                           0                   1c24bf0a4a833       csi-hostpathplugin-9rxph                                     kube-system
	21f927e7d6330       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             About a minute ago   Exited              patch                                    2                   01570eefbef7b       ingress-nginx-admission-patch-4j6nj                          ingress-nginx
	6308511f21c78       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           About a minute ago   Running             hostpath                                 0                   1c24bf0a4a833       csi-hostpathplugin-9rxph                                     kube-system
	f27ab360ec078       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                About a minute ago   Running             node-driver-registrar                    0                   1c24bf0a4a833       csi-hostpathplugin-9rxph                                     kube-system
	eb2875a12fdc4       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            About a minute ago   Running             gadget                                   0                   e6b83ed4311c6       gadget-d7mfz                                                 gadget
	8b2ec503607da       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   About a minute ago   Exited              create                                   0                   28773f98ddd3a       ingress-nginx-admission-create-94rkz                         ingress-nginx
	c1d7577e892ad       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   1c24bf0a4a833       csi-hostpathplugin-9rxph                                     kube-system
	a3686c57573f9       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   09437cdc7b959       nvidia-device-plugin-daemonset-nh42v                         kube-system
	a1310f21f82f3       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   0cd4c987e9adc       yakd-dashboard-5ff678cb9-tvcmp                               yakd-dashboard
	0603dc6c6335f       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   ff3ea635e405f       snapshot-controller-7d9fbc56b8-xjs28                         kube-system
	d0048c30bd262       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   7bab9d4fefec0       metrics-server-85b7d694d7-w9zzf                              kube-system
	0881184118c48       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              About a minute ago   Running             registry-proxy                           0                   380800881ce46       registry-proxy-ntzvs                                         kube-system
	4f21a033f7625       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   a808be4c410fc       csi-hostpath-resizer-0                                       kube-system
	5d0f635d3192a       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   8dfccd45f6135       csi-hostpath-attacher-0                                      kube-system
	8702acd353295       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               About a minute ago   Running             cloud-spanner-emulator                   0                   f9c98ac165c91       cloud-spanner-emulator-86bd5cbb97-jw2x4                      default
	058fd3f4c2519       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   8034af4362de2       snapshot-controller-7d9fbc56b8-jjjfk                         kube-system
	4a1acf727ae09       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   1b7e3092b591f       local-path-provisioner-648f6765c9-bsvzp                      local-path-storage
	f4379003f8bbb       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   c48abf0f71163       kube-ingress-dns-minikube                                    kube-system
	8208bb01eece1       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   dbd4bc3491894       registry-6b586f9694-hgg7l                                    kube-system
	3c3aa06bb4ba0       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   6fd4e04dbf2e2       coredns-66bc5c9577-jfpff                                     kube-system
	b7a004a1dd4c8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   353b833ce8055       storage-provisioner                                          kube-system
	07263ae55437d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   e67163b2b4436       kube-proxy-8p9ks                                             kube-system
	5931a7ff4389c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   fc0b70acda8dd       kindnet-g288l                                                kube-system
	fae02c07e9b59       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   eb9c52a781516       etcd-addons-377223                                           kube-system
	8a52242ff83bb       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   57a10dbdcecf8       kube-scheduler-addons-377223                                 kube-system
	2567a3a7bafb7       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   9f2906ce1eb6b       kube-apiserver-addons-377223                                 kube-system
	8b0193372487b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   aeb81ee14f38a       kube-controller-manager-addons-377223                        kube-system
	
	
	==> coredns [3c3aa06bb4ba09d56fe9add836fcacd57122f3975b1924a516b3f65b7dd51481] <==
	[INFO] 10.244.0.6:59433 - 39859 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.00221943s
	[INFO] 10.244.0.6:59433 - 46731 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000187991s
	[INFO] 10.244.0.6:59433 - 56515 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000210439s
	[INFO] 10.244.0.6:42079 - 43892 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000140435s
	[INFO] 10.244.0.6:42079 - 43646 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000175675s
	[INFO] 10.244.0.6:41186 - 41103 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000119446s
	[INFO] 10.244.0.6:41186 - 40923 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000151077s
	[INFO] 10.244.0.6:57431 - 4353 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000134363s
	[INFO] 10.244.0.6:57431 - 4165 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000078349s
	[INFO] 10.244.0.6:34764 - 35060 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00124231s
	[INFO] 10.244.0.6:34764 - 34623 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00123376s
	[INFO] 10.244.0.6:39071 - 62561 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000138851s
	[INFO] 10.244.0.6:39071 - 62422 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000073573s
	[INFO] 10.244.0.21:37428 - 33830 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000142626s
	[INFO] 10.244.0.21:45720 - 55985 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000093174s
	[INFO] 10.244.0.21:57279 - 39455 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000079603s
	[INFO] 10.244.0.21:44995 - 669 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000076765s
	[INFO] 10.244.0.21:40690 - 16846 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000085052s
	[INFO] 10.244.0.21:44949 - 9077 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000075862s
	[INFO] 10.244.0.21:56203 - 10615 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002281927s
	[INFO] 10.244.0.21:45961 - 54463 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001896625s
	[INFO] 10.244.0.21:58462 - 13847 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.0005637s
	[INFO] 10.244.0.21:54563 - 55058 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001455948s
	[INFO] 10.244.0.23:59358 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000209774s
	[INFO] 10.244.0.23:34632 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00017666s
	
	
	==> describe nodes <==
	Name:               addons-377223
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-377223
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=addons-377223
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T08_30_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-377223
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-377223"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 08:30:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-377223
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 08:33:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 08:33:01 +0000   Sat, 01 Nov 2025 08:30:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 08:33:01 +0000   Sat, 01 Nov 2025 08:30:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 08:33:01 +0000   Sat, 01 Nov 2025 08:30:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 08:33:01 +0000   Sat, 01 Nov 2025 08:31:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-377223
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                0e5ca097-7497-4b5a-acf6-0c7438d075b8
	  Boot ID:                    eebecd53-57fd-46e5-aa39-103fca906436
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  default                     cloud-spanner-emulator-86bd5cbb97-jw2x4      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         0s
	  gadget                      gadget-d7mfz                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  gcp-auth                    gcp-auth-78565c9fb4-sf5ck                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-rjv49    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m29s
	  kube-system                 coredns-66bc5c9577-jfpff                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m35s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 csi-hostpathplugin-9rxph                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 etcd-addons-377223                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m40s
	  kube-system                 kindnet-g288l                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m35s
	  kube-system                 kube-apiserver-addons-377223                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 kube-controller-manager-addons-377223        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-proxy-8p9ks                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-scheduler-addons-377223                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 metrics-server-85b7d694d7-w9zzf              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m30s
	  kube-system                 nvidia-device-plugin-daemonset-nh42v         0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 registry-6b586f9694-hgg7l                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 registry-creds-764b6fb674-jr4nd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 registry-proxy-ntzvs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 snapshot-controller-7d9fbc56b8-jjjfk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 snapshot-controller-7d9fbc56b8-xjs28         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  local-path-storage          local-path-provisioner-648f6765c9-bsvzp      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-tvcmp               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 2m33s  kube-proxy       
	  Normal   Starting                 2m40s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m40s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m40s  kubelet          Node addons-377223 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m40s  kubelet          Node addons-377223 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m40s  kubelet          Node addons-377223 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m36s  node-controller  Node addons-377223 event: Registered Node addons-377223 in Controller
	  Normal   NodeReady                113s   kubelet          Node addons-377223 status is now: NodeReady
	
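For reference, the request percentages in the table above are computed against the node's allocatable capacity: 1050m of CPU requests against 2 CPUs (2000m) is 52.5%, reported as 52%, and 638Mi of memory requests against 8022304Ki (about 7834Mi) is roughly 8%, matching the figures shown.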
	
	==> dmesg <==
	[Nov 1 08:04] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:06] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:08] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:09] overlayfs: idmapped layers are currently not supported
	[ +41.926823] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:10] overlayfs: idmapped layers are currently not supported
	[ +39.688208] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:11] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:13] overlayfs: idmapped layers are currently not supported
	[ +17.643407] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:15] overlayfs: idmapped layers are currently not supported
	[ +15.590074] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:16] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:17] overlayfs: idmapped layers are currently not supported
	[ +25.755276] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:18] overlayfs: idmapped layers are currently not supported
	[  +9.757193] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:21] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:22] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:23] overlayfs: idmapped layers are currently not supported
	[  +4.855106] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:28] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 1 08:30] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [fae02c07e9b59780efff42cf36c0cce0b725f4a0d809231656f5017f195aebe7] <==
	{"level":"warn","ts":"2025-11-01T08:30:24.369770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:24.387794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:24.436684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:24.442384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:24.458756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:24.474573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:24.505297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:24.509285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:24.531422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:24.544859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:24.564625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:24.576560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:24.597307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:24.608507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:24.629642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:24.658253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:24.672613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:24.697565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:24.767469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:40.744875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:40.760941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:31:02.701978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:31:02.722923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:31:02.742759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:31:02.760376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58240","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [13678f17060dfe64038fa4572b67317d5cf2008f763b3e94c774fa0d7c88d0b2] <==
	2025/11/01 08:32:22 GCP Auth Webhook started!
	2025/11/01 08:32:35 Ready to marshal response ...
	2025/11/01 08:32:35 Ready to write response ...
	2025/11/01 08:32:35 Ready to marshal response ...
	2025/11/01 08:32:35 Ready to write response ...
	2025/11/01 08:32:35 Ready to marshal response ...
	2025/11/01 08:32:35 Ready to write response ...
	2025/11/01 08:32:55 Ready to marshal response ...
	2025/11/01 08:32:55 Ready to write response ...
	2025/11/01 08:32:57 Ready to marshal response ...
	2025/11/01 08:32:57 Ready to write response ...
	2025/11/01 08:32:57 Ready to marshal response ...
	2025/11/01 08:32:57 Ready to write response ...
	2025/11/01 08:33:06 Ready to marshal response ...
	2025/11/01 08:33:06 Ready to write response ...
	2025/11/01 08:33:08 Ready to marshal response ...
	2025/11/01 08:33:08 Ready to write response ...
	
	
	==> kernel <==
	 08:33:08 up 17:15,  0 user,  load average: 1.30, 1.81, 2.38
	Linux addons-377223 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5931a7ff4389c4f1514bfe1a6d1b0c5c1f689a7388238437090ed28390f210ea] <==
	I1101 08:31:06.357310       1 controller.go:711] "Syncing nftables rules"
	I1101 08:31:14.761273       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:31:14.761335       1 main.go:301] handling current node
	I1101 08:31:24.757473       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:31:24.757562       1 main.go:301] handling current node
	I1101 08:31:34.755217       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:31:34.755341       1 main.go:301] handling current node
	I1101 08:31:44.754240       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:31:44.754301       1 main.go:301] handling current node
	I1101 08:31:54.754733       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:31:54.754765       1 main.go:301] handling current node
	I1101 08:32:04.755106       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:32:04.755149       1 main.go:301] handling current node
	I1101 08:32:14.760733       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:32:14.760847       1 main.go:301] handling current node
	I1101 08:32:24.754082       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:32:24.754113       1 main.go:301] handling current node
	I1101 08:32:34.756013       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:32:34.756270       1 main.go:301] handling current node
	I1101 08:32:44.755956       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:32:44.756120       1 main.go:301] handling current node
	I1101 08:32:54.760647       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:32:54.760680       1 main.go:301] handling current node
	I1101 08:33:04.755125       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:33:04.755159       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2567a3a7bafb70b92331208292b9e993dda24d204dd0e1335895f63c557be7b0] <==
	W1101 08:31:15.146528       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.43.129:443: connect: connection refused
	E1101 08:31:15.146685       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.43.129:443: connect: connection refused" logger="UnhandledError"
	W1101 08:31:15.146840       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.43.129:443: connect: connection refused
	E1101 08:31:15.146878       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.43.129:443: connect: connection refused" logger="UnhandledError"
	W1101 08:31:15.264266       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.43.129:443: connect: connection refused
	E1101 08:31:15.264307       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.43.129:443: connect: connection refused" logger="UnhandledError"
	W1101 08:31:38.794656       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 08:31:38.794692       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1101 08:31:38.794705       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1101 08:31:38.795871       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 08:31:38.795952       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1101 08:31:38.795962       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1101 08:31:52.502840       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 08:31:52.502913       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1101 08:31:52.503091       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.124.221:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.124.221:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.124.221:443: connect: connection refused" logger="UnhandledError"
	E1101 08:31:52.504475       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.124.221:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.124.221:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.124.221:443: connect: connection refused" logger="UnhandledError"
	I1101 08:31:52.582159       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1101 08:32:45.236744       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:52080: use of closed network connection
	E1101 08:32:45.415580       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:52092: use of closed network connection
	
	
	==> kube-controller-manager [8b0193372487bea326225079bf14bbd934e98d53cba7eaf50fc1bc3f324dcf89] <==
	I1101 08:30:32.725456       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-377223"
	I1101 08:30:32.725567       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 08:30:32.725618       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 08:30:32.725943       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 08:30:32.726222       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 08:30:32.726409       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 08:30:32.726639       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 08:30:32.727514       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 08:30:32.727715       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 08:30:32.728749       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 08:30:32.731033       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 08:30:32.731130       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 08:30:32.733509       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 08:30:32.744486       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	E1101 08:30:38.000977       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1101 08:31:02.695063       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 08:31:02.695219       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1101 08:31:02.695282       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1101 08:31:02.713645       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1101 08:31:02.718340       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1101 08:31:02.796281       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 08:31:02.819932       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 08:31:17.732363       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1101 08:31:32.801509       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 08:31:32.829206       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [07263ae55437dd8f877371c44f48f64a9062ae7d3979897f96b212a18ebf56d0] <==
	I1101 08:30:34.621317       1 server_linux.go:53] "Using iptables proxy"
	I1101 08:30:34.709737       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 08:30:34.810560       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 08:30:34.810590       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1101 08:30:34.810664       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 08:30:34.868286       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 08:30:34.868340       1 server_linux.go:132] "Using iptables Proxier"
	I1101 08:30:34.875282       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 08:30:34.875581       1 server.go:527] "Version info" version="v1.34.1"
	I1101 08:30:34.875595       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 08:30:34.877463       1 config.go:200] "Starting service config controller"
	I1101 08:30:34.877473       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 08:30:34.877505       1 config.go:106] "Starting endpoint slice config controller"
	I1101 08:30:34.877509       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 08:30:34.877519       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 08:30:34.877523       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 08:30:34.878115       1 config.go:309] "Starting node config controller"
	I1101 08:30:34.878122       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 08:30:34.878138       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 08:30:34.983596       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 08:30:34.983664       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 08:30:34.983919       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [8a52242ff83bb2c360c37d00a820f361e325851ade8acc4cc79d3753a40747c2] <==
	I1101 08:30:26.818023       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 08:30:26.820046       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 08:30:26.820085       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 08:30:26.820847       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 08:30:26.820924       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1101 08:30:26.826268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 08:30:26.826443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1101 08:30:26.829285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 08:30:26.829423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 08:30:26.829739       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 08:30:26.830267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 08:30:26.834194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 08:30:26.834359       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 08:30:26.834425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 08:30:26.834532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 08:30:26.834589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 08:30:26.834621       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 08:30:26.834656       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 08:30:26.834703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 08:30:26.834731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 08:30:26.834765       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 08:30:26.834866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 08:30:26.834899       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 08:30:26.834949       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1101 08:30:27.920518       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 08:33:06 addons-377223 kubelet[1283]: I1101 08:33:06.343745    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/aac8705b-d25d-43b9-b634-2c83bbaeddec-script\") pod \"helper-pod-delete-pvc-b9d8d8a4-42f3-4d56-9455-13fa291567c9\" (UID: \"aac8705b-d25d-43b9-b634-2c83bbaeddec\") " pod="local-path-storage/helper-pod-delete-pvc-b9d8d8a4-42f3-4d56-9455-13fa291567c9"
	Nov 01 08:33:06 addons-377223 kubelet[1283]: I1101 08:33:06.343796    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/aac8705b-d25d-43b9-b634-2c83bbaeddec-gcp-creds\") pod \"helper-pod-delete-pvc-b9d8d8a4-42f3-4d56-9455-13fa291567c9\" (UID: \"aac8705b-d25d-43b9-b634-2c83bbaeddec\") " pod="local-path-storage/helper-pod-delete-pvc-b9d8d8a4-42f3-4d56-9455-13fa291567c9"
	Nov 01 08:33:06 addons-377223 kubelet[1283]: I1101 08:33:06.343833    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvzcq\" (UniqueName: \"kubernetes.io/projected/aac8705b-d25d-43b9-b634-2c83bbaeddec-kube-api-access-dvzcq\") pod \"helper-pod-delete-pvc-b9d8d8a4-42f3-4d56-9455-13fa291567c9\" (UID: \"aac8705b-d25d-43b9-b634-2c83bbaeddec\") " pod="local-path-storage/helper-pod-delete-pvc-b9d8d8a4-42f3-4d56-9455-13fa291567c9"
	Nov 01 08:33:06 addons-377223 kubelet[1283]: W1101 08:33:06.618838    1283 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6884fdaa9d12b8ac05ab8c27110a73f94e382dc819395576a961daa9562f8a7c/crio-e86e9bf1a1460414614eccdf10e35cdf7f20581ef99de6b4ee27a87a7e499fea WatchSource:0}: Error finding container e86e9bf1a1460414614eccdf10e35cdf7f20581ef99de6b4ee27a87a7e499fea: Status 404 returned error can't find the container with id e86e9bf1a1460414614eccdf10e35cdf7f20581ef99de6b4ee27a87a7e499fea
	Nov 01 08:33:07 addons-377223 kubelet[1283]: I1101 08:33:07.068297    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-ntzvs" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 08:33:07 addons-377223 kubelet[1283]: I1101 08:33:07.959034    1283 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/aac8705b-d25d-43b9-b634-2c83bbaeddec-script\") pod \"aac8705b-d25d-43b9-b634-2c83bbaeddec\" (UID: \"aac8705b-d25d-43b9-b634-2c83bbaeddec\") "
	Nov 01 08:33:07 addons-377223 kubelet[1283]: I1101 08:33:07.959084    1283 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/aac8705b-d25d-43b9-b634-2c83bbaeddec-data\") pod \"aac8705b-d25d-43b9-b634-2c83bbaeddec\" (UID: \"aac8705b-d25d-43b9-b634-2c83bbaeddec\") "
	Nov 01 08:33:07 addons-377223 kubelet[1283]: I1101 08:33:07.959100    1283 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/aac8705b-d25d-43b9-b634-2c83bbaeddec-gcp-creds\") pod \"aac8705b-d25d-43b9-b634-2c83bbaeddec\" (UID: \"aac8705b-d25d-43b9-b634-2c83bbaeddec\") "
	Nov 01 08:33:07 addons-377223 kubelet[1283]: I1101 08:33:07.959125    1283 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvzcq\" (UniqueName: \"kubernetes.io/projected/aac8705b-d25d-43b9-b634-2c83bbaeddec-kube-api-access-dvzcq\") pod \"aac8705b-d25d-43b9-b634-2c83bbaeddec\" (UID: \"aac8705b-d25d-43b9-b634-2c83bbaeddec\") "
	Nov 01 08:33:07 addons-377223 kubelet[1283]: I1101 08:33:07.959603    1283 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aac8705b-d25d-43b9-b634-2c83bbaeddec-data" (OuterVolumeSpecName: "data") pod "aac8705b-d25d-43b9-b634-2c83bbaeddec" (UID: "aac8705b-d25d-43b9-b634-2c83bbaeddec"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 01 08:33:07 addons-377223 kubelet[1283]: I1101 08:33:07.959979    1283 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aac8705b-d25d-43b9-b634-2c83bbaeddec-script" (OuterVolumeSpecName: "script") pod "aac8705b-d25d-43b9-b634-2c83bbaeddec" (UID: "aac8705b-d25d-43b9-b634-2c83bbaeddec"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Nov 01 08:33:07 addons-377223 kubelet[1283]: I1101 08:33:07.960021    1283 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aac8705b-d25d-43b9-b634-2c83bbaeddec-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "aac8705b-d25d-43b9-b634-2c83bbaeddec" (UID: "aac8705b-d25d-43b9-b634-2c83bbaeddec"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 01 08:33:07 addons-377223 kubelet[1283]: I1101 08:33:07.966244    1283 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aac8705b-d25d-43b9-b634-2c83bbaeddec-kube-api-access-dvzcq" (OuterVolumeSpecName: "kube-api-access-dvzcq") pod "aac8705b-d25d-43b9-b634-2c83bbaeddec" (UID: "aac8705b-d25d-43b9-b634-2c83bbaeddec"). InnerVolumeSpecName "kube-api-access-dvzcq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 01 08:33:08 addons-377223 kubelet[1283]: I1101 08:33:08.059640    1283 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/aac8705b-d25d-43b9-b634-2c83bbaeddec-script\") on node \"addons-377223\" DevicePath \"\""
	Nov 01 08:33:08 addons-377223 kubelet[1283]: I1101 08:33:08.059701    1283 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/aac8705b-d25d-43b9-b634-2c83bbaeddec-data\") on node \"addons-377223\" DevicePath \"\""
	Nov 01 08:33:08 addons-377223 kubelet[1283]: I1101 08:33:08.059718    1283 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/aac8705b-d25d-43b9-b634-2c83bbaeddec-gcp-creds\") on node \"addons-377223\" DevicePath \"\""
	Nov 01 08:33:08 addons-377223 kubelet[1283]: I1101 08:33:08.059729    1283 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dvzcq\" (UniqueName: \"kubernetes.io/projected/aac8705b-d25d-43b9-b634-2c83bbaeddec-kube-api-access-dvzcq\") on node \"addons-377223\" DevicePath \"\""
	Nov 01 08:33:08 addons-377223 kubelet[1283]: I1101 08:33:08.073191    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="887e9be5-d26a-4e20-b993-18af783818c7" path="/var/lib/kubelet/pods/887e9be5-d26a-4e20-b993-18af783818c7/volumes"
	Nov 01 08:33:08 addons-377223 kubelet[1283]: I1101 08:33:08.668214    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqmc5\" (UniqueName: \"kubernetes.io/projected/5f1a8f82-db98-482b-b1fd-9b6614922fa9-kube-api-access-kqmc5\") pod \"task-pv-pod\" (UID: \"5f1a8f82-db98-482b-b1fd-9b6614922fa9\") " pod="default/task-pv-pod"
	Nov 01 08:33:08 addons-377223 kubelet[1283]: I1101 08:33:08.668280    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-364b1efa-5ca7-4443-b4e5-9a80d9c1aa09\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^645c5f13-b6fd-11f0-ba54-963769a4c647\") pod \"task-pv-pod\" (UID: \"5f1a8f82-db98-482b-b1fd-9b6614922fa9\") " pod="default/task-pv-pod"
	Nov 01 08:33:08 addons-377223 kubelet[1283]: I1101 08:33:08.668349    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5f1a8f82-db98-482b-b1fd-9b6614922fa9-gcp-creds\") pod \"task-pv-pod\" (UID: \"5f1a8f82-db98-482b-b1fd-9b6614922fa9\") " pod="default/task-pv-pod"
	Nov 01 08:33:08 addons-377223 kubelet[1283]: I1101 08:33:08.800948    1283 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-364b1efa-5ca7-4443-b4e5-9a80d9c1aa09\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^645c5f13-b6fd-11f0-ba54-963769a4c647\") pod \"task-pv-pod\" (UID: \"5f1a8f82-db98-482b-b1fd-9b6614922fa9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/25d940cdc76b0cfb32702d291777e599f1ca1824e5a2c90af596baa124c53a36/globalmount\"" pod="default/task-pv-pod"
	Nov 01 08:33:08 addons-377223 kubelet[1283]: I1101 08:33:08.871117    1283 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e86e9bf1a1460414614eccdf10e35cdf7f20581ef99de6b4ee27a87a7e499fea"
	Nov 01 08:33:08 addons-377223 kubelet[1283]: E1101 08:33:08.872783    1283 status_manager.go:1018] "Failed to get status for pod" err="pods \"helper-pod-delete-pvc-b9d8d8a4-42f3-4d56-9455-13fa291567c9\" is forbidden: User \"system:node:addons-377223\" cannot get resource \"pods\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-377223' and this object" podUID="aac8705b-d25d-43b9-b634-2c83bbaeddec" pod="local-path-storage/helper-pod-delete-pvc-b9d8d8a4-42f3-4d56-9455-13fa291567c9"
	Nov 01 08:33:08 addons-377223 kubelet[1283]: W1101 08:33:08.894410    1283 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6884fdaa9d12b8ac05ab8c27110a73f94e382dc819395576a961daa9562f8a7c/crio-2870f939c2ece8708d151c8dcb84d7486d15e5fd3100bd837e6eff1fba107e7c WatchSource:0}: Error finding container 2870f939c2ece8708d151c8dcb84d7486d15e5fd3100bd837e6eff1fba107e7c: Status 404 returned error can't find the container with id 2870f939c2ece8708d151c8dcb84d7486d15e5fd3100bd837e6eff1fba107e7c
	
	
	==> storage-provisioner [b7a004a1dd4c8a3998b83517cac0d350eff63e109d1288d34cf9bd98bd0dab69] <==
	W1101 08:32:44.618375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:32:46.622122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:32:46.626363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:32:48.630578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:32:48.635936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:32:50.639545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:32:50.644209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:32:52.647095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:32:52.653448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:32:54.656831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:32:54.661144       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:32:56.664797       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:32:56.671281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:32:58.675408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:32:58.681382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:33:00.684016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:33:00.690547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:33:02.693823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:33:02.700746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:33:04.703705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:33:04.708259       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:33:06.715518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:33:06.724346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:33:08.729796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:33:08.744013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-377223 -n addons-377223
helpers_test.go:269: (dbg) Run:  kubectl --context addons-377223 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: task-pv-pod ingress-nginx-admission-create-94rkz ingress-nginx-admission-patch-4j6nj registry-creds-764b6fb674-jr4nd
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-377223 describe pod task-pv-pod ingress-nginx-admission-create-94rkz ingress-nginx-admission-patch-4j6nj registry-creds-764b6fb674-jr4nd
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-377223 describe pod task-pv-pod ingress-nginx-admission-create-94rkz ingress-nginx-admission-patch-4j6nj registry-creds-764b6fb674-jr4nd: exit status 1 (115.586328ms)

                                                
                                                
-- stdout --
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-377223/192.168.49.2
	Start Time:       Sat, 01 Nov 2025 08:33:08 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kqmc5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-kqmc5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/task-pv-pod to addons-377223
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-94rkz" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-4j6nj" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-jr4nd" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-377223 describe pod task-pv-pod ingress-nginx-admission-create-94rkz ingress-nginx-admission-patch-4j6nj registry-creds-764b6fb674-jr4nd: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-377223 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377223 addons disable headlamp --alsologtostderr -v=1: exit status 11 (337.401632ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 08:33:10.445614 2324190 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:33:10.447332 2324190 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:33:10.447379 2324190 out.go:374] Setting ErrFile to fd 2...
	I1101 08:33:10.447402 2324190 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:33:10.447732 2324190 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 08:33:10.448151 2324190 mustload.go:66] Loading cluster: addons-377223
	I1101 08:33:10.448764 2324190 config.go:182] Loaded profile config "addons-377223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:33:10.448820 2324190 addons.go:607] checking whether the cluster is paused
	I1101 08:33:10.448976 2324190 config.go:182] Loaded profile config "addons-377223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:33:10.449020 2324190 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:33:10.449522 2324190 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:33:10.469659 2324190 ssh_runner.go:195] Run: systemctl --version
	I1101 08:33:10.469716 2324190 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:33:10.490263 2324190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:33:10.598426 2324190 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:33:10.598507 2324190 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:33:10.652960 2324190 cri.go:89] found id: "414b5bc39c329ea5379cc50b2f0931075b8101b78dc870b2b9a824bebf99ba8b"
	I1101 08:33:10.652979 2324190 cri.go:89] found id: "061ec86ab4df32357843abeec4767f4f1461ddcead4c7cf9d1492c198dcb3d3b"
	I1101 08:33:10.652984 2324190 cri.go:89] found id: "1e44c8f5f77ecb7acff0164f0a819e1059e60366883e9ce5725f335e263d6a55"
	I1101 08:33:10.652988 2324190 cri.go:89] found id: "6308511f21c7827239fbd03746b7074e42f38be4e4d6351dca4c35f1097133ef"
	I1101 08:33:10.652991 2324190 cri.go:89] found id: "f27ab360ec07837ab4e111f876b9abd1e2f28c700a55782e54fb5162221ed2b4"
	I1101 08:33:10.652995 2324190 cri.go:89] found id: "c1d7577e892adbb3f436f19e3b28d82a49f1cbfed6b8836c1ed6f86c65f16401"
	I1101 08:33:10.652998 2324190 cri.go:89] found id: "a3686c57573f9a7ed9871c19d746a5719c1d304d85f02afc10c29a8034b950eb"
	I1101 08:33:10.653001 2324190 cri.go:89] found id: "0603dc6c6335f97df7e85d9a14e859a49db2974a48e29156dd5264d896b4de45"
	I1101 08:33:10.653004 2324190 cri.go:89] found id: "d0048c30bd26213dfb453fa2bbd938c97e55fab6b53fc18bf545cdf3d996629d"
	I1101 08:33:10.653012 2324190 cri.go:89] found id: "0881184118c48ea6a57033511f480150827ad00b72255518f4d483725cab9f6c"
	I1101 08:33:10.653015 2324190 cri.go:89] found id: "4f21a033f7625d849deaefcdab250333db4bcf976055c2054e5820079f2d598e"
	I1101 08:33:10.653018 2324190 cri.go:89] found id: "5d0f635d3192a9e4f37b1f74942ca9a6d8846c5343e838584565abab0973a4b6"
	I1101 08:33:10.653021 2324190 cri.go:89] found id: "058fd3f4c2519a11447a33c3880fa2b1da6db273202e78739d3bb6bc56aafea3"
	I1101 08:33:10.653024 2324190 cri.go:89] found id: "f4379003f8bbbe0705cf7426f24a33ec6aaeb1b1f4fbd166749ec7eb68e28872"
	I1101 08:33:10.653028 2324190 cri.go:89] found id: "8208bb01eece1ad45ab18a4c4a3a0d21d53697dbf385e141bee5bd9ba3f5de1c"
	I1101 08:33:10.653033 2324190 cri.go:89] found id: "3c3aa06bb4ba09d56fe9add836fcacd57122f3975b1924a516b3f65b7dd51481"
	I1101 08:33:10.653037 2324190 cri.go:89] found id: "b7a004a1dd4c8a3998b83517cac0d350eff63e109d1288d34cf9bd98bd0dab69"
	I1101 08:33:10.653041 2324190 cri.go:89] found id: "07263ae55437dd8f877371c44f48f64a9062ae7d3979897f96b212a18ebf56d0"
	I1101 08:33:10.653086 2324190 cri.go:89] found id: "5931a7ff4389c4f1514bfe1a6d1b0c5c1f689a7388238437090ed28390f210ea"
	I1101 08:33:10.653091 2324190 cri.go:89] found id: "fae02c07e9b59780efff42cf36c0cce0b725f4a0d809231656f5017f195aebe7"
	I1101 08:33:10.653104 2324190 cri.go:89] found id: "8a52242ff83bb2c360c37d00a820f361e325851ade8acc4cc79d3753a40747c2"
	I1101 08:33:10.653107 2324190 cri.go:89] found id: "2567a3a7bafb70b92331208292b9e993dda24d204dd0e1335895f63c557be7b0"
	I1101 08:33:10.653116 2324190 cri.go:89] found id: "8b0193372487bea326225079bf14bbd934e98d53cba7eaf50fc1bc3f324dcf89"
	I1101 08:33:10.653119 2324190 cri.go:89] found id: ""
	I1101 08:33:10.653170 2324190 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:33:10.674575 2324190 out.go:203] 
	W1101 08:33:10.678392 2324190 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:33:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:33:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:33:10.678466 2324190 out.go:285] * 
	* 
	W1101 08:33:10.692350 2324190 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:33:10.696350 2324190 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-377223 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (4.10s)
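[Editor's note] Every addon-disable failure in this run exits with the same MK_ADDON_DISABLE_PAUSED error. The stderr dumps above show the sequence: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl (which succeeds) and then running "sudo runc list -f json", which fails with "open /run/runc: no such file or directory" on this CRI-O node. The following is a minimal sketch, not minikube's actual implementation; it assumes the docker driver and the node container name "addons-377223" used in this run, and simply re-runs those two commands so the failure can be reproduced outside the test suite.

	// repro_paused_check.go - re-run the two commands from the "check paused" step.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// runInNode executes a command inside the minikube node container via docker exec.
	func runInNode(node string, args ...string) (string, error) {
		cmd := exec.Command("docker", append([]string{"exec", node}, args...)...)
		out, err := cmd.CombinedOutput()
		return string(out), err
	}

	func main() {
		const node = "addons-377223" // node container name for this profile (assumption)

		// Step 1: list kube-system containers via crictl; this step succeeds in the log.
		out, err := runInNode(node, "sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system")
		fmt.Printf("crictl ps: err=%v\n%s\n", err, out)

		// Step 2: list runc containers; this is the step that fails in the log with
		// "open /run/runc: no such file or directory".
		out, err = runInNode(node, "sudo", "runc", "list", "-f", "json")
		fmt.Printf("runc list: err=%v\n%s\n", err, out)
	}

If step 1 succeeds and step 2 fails the same way, the failure is reproducible independently of the addon being disabled.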

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.41s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-jw2x4" [8dfb0bf0-13f5-424b-8d36-008c32782685] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003617923s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-377223 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377223 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (396.642425ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 08:33:06.671191 2323554 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:33:06.674729 2323554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:33:06.674757 2323554 out.go:374] Setting ErrFile to fd 2...
	I1101 08:33:06.674765 2323554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:33:06.675102 2323554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 08:33:06.675474 2323554 mustload.go:66] Loading cluster: addons-377223
	I1101 08:33:06.677310 2323554 config.go:182] Loaded profile config "addons-377223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:33:06.677369 2323554 addons.go:607] checking whether the cluster is paused
	I1101 08:33:06.677528 2323554 config.go:182] Loaded profile config "addons-377223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:33:06.677565 2323554 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:33:06.678064 2323554 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:33:06.716242 2323554 ssh_runner.go:195] Run: systemctl --version
	I1101 08:33:06.716298 2323554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:33:06.737471 2323554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:33:06.846119 2323554 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:33:06.846225 2323554 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:33:06.889693 2323554 cri.go:89] found id: "414b5bc39c329ea5379cc50b2f0931075b8101b78dc870b2b9a824bebf99ba8b"
	I1101 08:33:06.889712 2323554 cri.go:89] found id: "061ec86ab4df32357843abeec4767f4f1461ddcead4c7cf9d1492c198dcb3d3b"
	I1101 08:33:06.889718 2323554 cri.go:89] found id: "1e44c8f5f77ecb7acff0164f0a819e1059e60366883e9ce5725f335e263d6a55"
	I1101 08:33:06.889722 2323554 cri.go:89] found id: "6308511f21c7827239fbd03746b7074e42f38be4e4d6351dca4c35f1097133ef"
	I1101 08:33:06.889725 2323554 cri.go:89] found id: "f27ab360ec07837ab4e111f876b9abd1e2f28c700a55782e54fb5162221ed2b4"
	I1101 08:33:06.889730 2323554 cri.go:89] found id: "c1d7577e892adbb3f436f19e3b28d82a49f1cbfed6b8836c1ed6f86c65f16401"
	I1101 08:33:06.889733 2323554 cri.go:89] found id: "a3686c57573f9a7ed9871c19d746a5719c1d304d85f02afc10c29a8034b950eb"
	I1101 08:33:06.889737 2323554 cri.go:89] found id: "0603dc6c6335f97df7e85d9a14e859a49db2974a48e29156dd5264d896b4de45"
	I1101 08:33:06.889740 2323554 cri.go:89] found id: "d0048c30bd26213dfb453fa2bbd938c97e55fab6b53fc18bf545cdf3d996629d"
	I1101 08:33:06.889761 2323554 cri.go:89] found id: "0881184118c48ea6a57033511f480150827ad00b72255518f4d483725cab9f6c"
	I1101 08:33:06.889769 2323554 cri.go:89] found id: "4f21a033f7625d849deaefcdab250333db4bcf976055c2054e5820079f2d598e"
	I1101 08:33:06.889772 2323554 cri.go:89] found id: "5d0f635d3192a9e4f37b1f74942ca9a6d8846c5343e838584565abab0973a4b6"
	I1101 08:33:06.889775 2323554 cri.go:89] found id: "058fd3f4c2519a11447a33c3880fa2b1da6db273202e78739d3bb6bc56aafea3"
	I1101 08:33:06.889778 2323554 cri.go:89] found id: "f4379003f8bbbe0705cf7426f24a33ec6aaeb1b1f4fbd166749ec7eb68e28872"
	I1101 08:33:06.889782 2323554 cri.go:89] found id: "8208bb01eece1ad45ab18a4c4a3a0d21d53697dbf385e141bee5bd9ba3f5de1c"
	I1101 08:33:06.889786 2323554 cri.go:89] found id: "3c3aa06bb4ba09d56fe9add836fcacd57122f3975b1924a516b3f65b7dd51481"
	I1101 08:33:06.889795 2323554 cri.go:89] found id: "b7a004a1dd4c8a3998b83517cac0d350eff63e109d1288d34cf9bd98bd0dab69"
	I1101 08:33:06.889799 2323554 cri.go:89] found id: "07263ae55437dd8f877371c44f48f64a9062ae7d3979897f96b212a18ebf56d0"
	I1101 08:33:06.889802 2323554 cri.go:89] found id: "5931a7ff4389c4f1514bfe1a6d1b0c5c1f689a7388238437090ed28390f210ea"
	I1101 08:33:06.889805 2323554 cri.go:89] found id: "fae02c07e9b59780efff42cf36c0cce0b725f4a0d809231656f5017f195aebe7"
	I1101 08:33:06.889809 2323554 cri.go:89] found id: "8a52242ff83bb2c360c37d00a820f361e325851ade8acc4cc79d3753a40747c2"
	I1101 08:33:06.889812 2323554 cri.go:89] found id: "2567a3a7bafb70b92331208292b9e993dda24d204dd0e1335895f63c557be7b0"
	I1101 08:33:06.889815 2323554 cri.go:89] found id: "8b0193372487bea326225079bf14bbd934e98d53cba7eaf50fc1bc3f324dcf89"
	I1101 08:33:06.889819 2323554 cri.go:89] found id: ""
	I1101 08:33:06.889861 2323554 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:33:06.912961 2323554 out.go:203] 
	W1101 08:33:06.915953 2323554 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:33:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:33:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:33:06.915982 2323554 out.go:285] * 
	* 
	W1101 08:33:06.928864 2323554 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:33:06.932195 2323554 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-377223 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (6.41s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (9.39s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-377223 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-377223 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377223 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377223 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377223 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377223 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377223 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377223 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [887e9be5-d26a-4e20-b993-18af783818c7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [887e9be5-d26a-4e20-b993-18af783818c7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [887e9be5-d26a-4e20-b993-18af783818c7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003504766s
addons_test.go:967: (dbg) Run:  kubectl --context addons-377223 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-377223 ssh "cat /opt/local-path-provisioner/pvc-b9d8d8a4-42f3-4d56-9455-13fa291567c9_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-377223 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-377223 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-377223 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377223 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (284.560232ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 08:33:06.373555 2323516 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:33:06.375078 2323516 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:33:06.375091 2323516 out.go:374] Setting ErrFile to fd 2...
	I1101 08:33:06.375096 2323516 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:33:06.375368 2323516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 08:33:06.375670 2323516 mustload.go:66] Loading cluster: addons-377223
	I1101 08:33:06.376084 2323516 config.go:182] Loaded profile config "addons-377223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:33:06.376112 2323516 addons.go:607] checking whether the cluster is paused
	I1101 08:33:06.376217 2323516 config.go:182] Loaded profile config "addons-377223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:33:06.376242 2323516 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:33:06.376718 2323516 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:33:06.393949 2323516 ssh_runner.go:195] Run: systemctl --version
	I1101 08:33:06.394001 2323516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:33:06.411578 2323516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:33:06.514512 2323516 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:33:06.514615 2323516 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:33:06.543971 2323516 cri.go:89] found id: "414b5bc39c329ea5379cc50b2f0931075b8101b78dc870b2b9a824bebf99ba8b"
	I1101 08:33:06.544025 2323516 cri.go:89] found id: "061ec86ab4df32357843abeec4767f4f1461ddcead4c7cf9d1492c198dcb3d3b"
	I1101 08:33:06.544044 2323516 cri.go:89] found id: "1e44c8f5f77ecb7acff0164f0a819e1059e60366883e9ce5725f335e263d6a55"
	I1101 08:33:06.544064 2323516 cri.go:89] found id: "6308511f21c7827239fbd03746b7074e42f38be4e4d6351dca4c35f1097133ef"
	I1101 08:33:06.544082 2323516 cri.go:89] found id: "f27ab360ec07837ab4e111f876b9abd1e2f28c700a55782e54fb5162221ed2b4"
	I1101 08:33:06.544110 2323516 cri.go:89] found id: "c1d7577e892adbb3f436f19e3b28d82a49f1cbfed6b8836c1ed6f86c65f16401"
	I1101 08:33:06.544126 2323516 cri.go:89] found id: "a3686c57573f9a7ed9871c19d746a5719c1d304d85f02afc10c29a8034b950eb"
	I1101 08:33:06.544144 2323516 cri.go:89] found id: "0603dc6c6335f97df7e85d9a14e859a49db2974a48e29156dd5264d896b4de45"
	I1101 08:33:06.544162 2323516 cri.go:89] found id: "d0048c30bd26213dfb453fa2bbd938c97e55fab6b53fc18bf545cdf3d996629d"
	I1101 08:33:06.544207 2323516 cri.go:89] found id: "0881184118c48ea6a57033511f480150827ad00b72255518f4d483725cab9f6c"
	I1101 08:33:06.544227 2323516 cri.go:89] found id: "4f21a033f7625d849deaefcdab250333db4bcf976055c2054e5820079f2d598e"
	I1101 08:33:06.544246 2323516 cri.go:89] found id: "5d0f635d3192a9e4f37b1f74942ca9a6d8846c5343e838584565abab0973a4b6"
	I1101 08:33:06.544281 2323516 cri.go:89] found id: "058fd3f4c2519a11447a33c3880fa2b1da6db273202e78739d3bb6bc56aafea3"
	I1101 08:33:06.544301 2323516 cri.go:89] found id: "f4379003f8bbbe0705cf7426f24a33ec6aaeb1b1f4fbd166749ec7eb68e28872"
	I1101 08:33:06.544320 2323516 cri.go:89] found id: "8208bb01eece1ad45ab18a4c4a3a0d21d53697dbf385e141bee5bd9ba3f5de1c"
	I1101 08:33:06.544344 2323516 cri.go:89] found id: "3c3aa06bb4ba09d56fe9add836fcacd57122f3975b1924a516b3f65b7dd51481"
	I1101 08:33:06.544400 2323516 cri.go:89] found id: "b7a004a1dd4c8a3998b83517cac0d350eff63e109d1288d34cf9bd98bd0dab69"
	I1101 08:33:06.544418 2323516 cri.go:89] found id: "07263ae55437dd8f877371c44f48f64a9062ae7d3979897f96b212a18ebf56d0"
	I1101 08:33:06.544422 2323516 cri.go:89] found id: "5931a7ff4389c4f1514bfe1a6d1b0c5c1f689a7388238437090ed28390f210ea"
	I1101 08:33:06.544439 2323516 cri.go:89] found id: "fae02c07e9b59780efff42cf36c0cce0b725f4a0d809231656f5017f195aebe7"
	I1101 08:33:06.544445 2323516 cri.go:89] found id: "8a52242ff83bb2c360c37d00a820f361e325851ade8acc4cc79d3753a40747c2"
	I1101 08:33:06.544449 2323516 cri.go:89] found id: "2567a3a7bafb70b92331208292b9e993dda24d204dd0e1335895f63c557be7b0"
	I1101 08:33:06.544452 2323516 cri.go:89] found id: "8b0193372487bea326225079bf14bbd934e98d53cba7eaf50fc1bc3f324dcf89"
	I1101 08:33:06.544455 2323516 cri.go:89] found id: ""
	I1101 08:33:06.544514 2323516 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:33:06.561691 2323516 out.go:203] 
	W1101 08:33:06.564502 2323516 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:33:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:33:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:33:06.564522 2323516 out.go:285] * 
	* 
	W1101 08:33:06.584925 2323516 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:33:06.595961 2323516 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-377223 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (9.39s)
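[Editor's note] The Headlamp, CloudSpanner, LocalPath and NvidiaDevicePlugin disable failures in this report all surface the same missing /run/runc state directory rather than a problem with the addons themselves. The sketch below is a small diagnostic, under the same assumptions as the previous note (docker driver, node container "addons-377223"); whether CRI-O on this image is configured for runc or crun, and therefore which state directory should exist, is not established by this report. It checks which runtime state directories are actually present in the node and dumps the CRI runtime status via "crictl info".

	// inspect_runtime_root.go - check runtime state directories and CRI status in the node.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// inNode runs a command inside the node container and returns its combined output,
	// including the error text when the command exits non-zero (e.g. a missing path).
	func inNode(args ...string) string {
		out, err := exec.Command("docker",
			append([]string{"exec", "addons-377223"}, args...)...).CombinedOutput()
		if err != nil {
			return fmt.Sprintf("error: %v\n%s", err, out)
		}
		return string(out)
	}

	func main() {
		// Which OCI runtime state directories exist? (ls reports the missing one as an error.)
		fmt.Println(inNode("ls", "-ld", "/run/runc", "/run/crun"))

		// Runtime status and configuration as reported by the CRI endpoint.
		fmt.Println(inNode("sudo", "crictl", "info"))
	}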

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-nh42v" [bf77424f-f0e0-41b0-9413-f5db070cde1b] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00391528s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-377223 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377223 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (257.883653ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 08:32:57.006790 2323080 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:32:57.008299 2323080 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:32:57.008372 2323080 out.go:374] Setting ErrFile to fd 2...
	I1101 08:32:57.008398 2323080 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:32:57.008826 2323080 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 08:32:57.009269 2323080 mustload.go:66] Loading cluster: addons-377223
	I1101 08:32:57.009763 2323080 config.go:182] Loaded profile config "addons-377223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:32:57.009819 2323080 addons.go:607] checking whether the cluster is paused
	I1101 08:32:57.009974 2323080 config.go:182] Loaded profile config "addons-377223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:32:57.010019 2323080 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:32:57.010552 2323080 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:32:57.028661 2323080 ssh_runner.go:195] Run: systemctl --version
	I1101 08:32:57.028730 2323080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:32:57.047250 2323080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:32:57.150450 2323080 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:32:57.150540 2323080 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:32:57.179059 2323080 cri.go:89] found id: "414b5bc39c329ea5379cc50b2f0931075b8101b78dc870b2b9a824bebf99ba8b"
	I1101 08:32:57.179083 2323080 cri.go:89] found id: "061ec86ab4df32357843abeec4767f4f1461ddcead4c7cf9d1492c198dcb3d3b"
	I1101 08:32:57.179087 2323080 cri.go:89] found id: "1e44c8f5f77ecb7acff0164f0a819e1059e60366883e9ce5725f335e263d6a55"
	I1101 08:32:57.179096 2323080 cri.go:89] found id: "6308511f21c7827239fbd03746b7074e42f38be4e4d6351dca4c35f1097133ef"
	I1101 08:32:57.179100 2323080 cri.go:89] found id: "f27ab360ec07837ab4e111f876b9abd1e2f28c700a55782e54fb5162221ed2b4"
	I1101 08:32:57.179104 2323080 cri.go:89] found id: "c1d7577e892adbb3f436f19e3b28d82a49f1cbfed6b8836c1ed6f86c65f16401"
	I1101 08:32:57.179107 2323080 cri.go:89] found id: "a3686c57573f9a7ed9871c19d746a5719c1d304d85f02afc10c29a8034b950eb"
	I1101 08:32:57.179109 2323080 cri.go:89] found id: "0603dc6c6335f97df7e85d9a14e859a49db2974a48e29156dd5264d896b4de45"
	I1101 08:32:57.179113 2323080 cri.go:89] found id: "d0048c30bd26213dfb453fa2bbd938c97e55fab6b53fc18bf545cdf3d996629d"
	I1101 08:32:57.179119 2323080 cri.go:89] found id: "0881184118c48ea6a57033511f480150827ad00b72255518f4d483725cab9f6c"
	I1101 08:32:57.179123 2323080 cri.go:89] found id: "4f21a033f7625d849deaefcdab250333db4bcf976055c2054e5820079f2d598e"
	I1101 08:32:57.179126 2323080 cri.go:89] found id: "5d0f635d3192a9e4f37b1f74942ca9a6d8846c5343e838584565abab0973a4b6"
	I1101 08:32:57.179129 2323080 cri.go:89] found id: "058fd3f4c2519a11447a33c3880fa2b1da6db273202e78739d3bb6bc56aafea3"
	I1101 08:32:57.179133 2323080 cri.go:89] found id: "f4379003f8bbbe0705cf7426f24a33ec6aaeb1b1f4fbd166749ec7eb68e28872"
	I1101 08:32:57.179137 2323080 cri.go:89] found id: "8208bb01eece1ad45ab18a4c4a3a0d21d53697dbf385e141bee5bd9ba3f5de1c"
	I1101 08:32:57.179145 2323080 cri.go:89] found id: "3c3aa06bb4ba09d56fe9add836fcacd57122f3975b1924a516b3f65b7dd51481"
	I1101 08:32:57.179149 2323080 cri.go:89] found id: "b7a004a1dd4c8a3998b83517cac0d350eff63e109d1288d34cf9bd98bd0dab69"
	I1101 08:32:57.179152 2323080 cri.go:89] found id: "07263ae55437dd8f877371c44f48f64a9062ae7d3979897f96b212a18ebf56d0"
	I1101 08:32:57.179155 2323080 cri.go:89] found id: "5931a7ff4389c4f1514bfe1a6d1b0c5c1f689a7388238437090ed28390f210ea"
	I1101 08:32:57.179158 2323080 cri.go:89] found id: "fae02c07e9b59780efff42cf36c0cce0b725f4a0d809231656f5017f195aebe7"
	I1101 08:32:57.179163 2323080 cri.go:89] found id: "8a52242ff83bb2c360c37d00a820f361e325851ade8acc4cc79d3753a40747c2"
	I1101 08:32:57.179169 2323080 cri.go:89] found id: "2567a3a7bafb70b92331208292b9e993dda24d204dd0e1335895f63c557be7b0"
	I1101 08:32:57.179172 2323080 cri.go:89] found id: "8b0193372487bea326225079bf14bbd934e98d53cba7eaf50fc1bc3f324dcf89"
	I1101 08:32:57.179175 2323080 cri.go:89] found id: ""
	I1101 08:32:57.179250 2323080 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:32:57.194119 2323080 out.go:203] 
	W1101 08:32:57.197036 2323080 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:32:57Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:32:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:32:57.197057 2323080 out.go:285] * 
	* 
	W1101 08:32:57.208647 2323080 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:32:57.211836 2323080 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-377223 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.27s)
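This disable failure is not specific to the NVIDIA device plugin: before disabling an addon, minikube checks whether the cluster is paused (addons.go:607 above), and on this CRI-O node that check shells out to "sudo runc list -f json", which fails because /run/runc does not exist. A minimal reproduction sketch against the same profile (assuming the node is reachable with minikube ssh; the crictl invocation is the same one shown in the stderr above):

	out/minikube-linux-arm64 -p addons-377223 ssh -- sudo runc list -f json
	# expected on this node: level=error msg="open /run/runc: no such file or directory"
	out/minikube-linux-arm64 -p addons-377223 ssh -- sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# crictl lists the kube-system containers without error, so only the runc-based pause check is failing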

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-tvcmp" [c5511a38-58f1-4e4a-8295-066e63f0c602] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.002772937s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-377223 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377223 addons disable yakd --alsologtostderr -v=1: exit status 11 (259.32103ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 08:32:51.744179 2322990 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:32:51.745697 2322990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:32:51.745710 2322990 out.go:374] Setting ErrFile to fd 2...
	I1101 08:32:51.745716 2322990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:32:51.745991 2322990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 08:32:51.746284 2322990 mustload.go:66] Loading cluster: addons-377223
	I1101 08:32:51.746686 2322990 config.go:182] Loaded profile config "addons-377223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:32:51.746711 2322990 addons.go:607] checking whether the cluster is paused
	I1101 08:32:51.746816 2322990 config.go:182] Loaded profile config "addons-377223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:32:51.746835 2322990 host.go:66] Checking if "addons-377223" exists ...
	I1101 08:32:51.747273 2322990 cli_runner.go:164] Run: docker container inspect addons-377223 --format={{.State.Status}}
	I1101 08:32:51.764151 2322990 ssh_runner.go:195] Run: systemctl --version
	I1101 08:32:51.764234 2322990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377223
	I1101 08:32:51.780518 2322990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36055 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/addons-377223/id_rsa Username:docker}
	I1101 08:32:51.886023 2322990 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:32:51.886102 2322990 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:32:51.916107 2322990 cri.go:89] found id: "414b5bc39c329ea5379cc50b2f0931075b8101b78dc870b2b9a824bebf99ba8b"
	I1101 08:32:51.916126 2322990 cri.go:89] found id: "061ec86ab4df32357843abeec4767f4f1461ddcead4c7cf9d1492c198dcb3d3b"
	I1101 08:32:51.916131 2322990 cri.go:89] found id: "1e44c8f5f77ecb7acff0164f0a819e1059e60366883e9ce5725f335e263d6a55"
	I1101 08:32:51.916134 2322990 cri.go:89] found id: "6308511f21c7827239fbd03746b7074e42f38be4e4d6351dca4c35f1097133ef"
	I1101 08:32:51.916137 2322990 cri.go:89] found id: "f27ab360ec07837ab4e111f876b9abd1e2f28c700a55782e54fb5162221ed2b4"
	I1101 08:32:51.916141 2322990 cri.go:89] found id: "c1d7577e892adbb3f436f19e3b28d82a49f1cbfed6b8836c1ed6f86c65f16401"
	I1101 08:32:51.916144 2322990 cri.go:89] found id: "a3686c57573f9a7ed9871c19d746a5719c1d304d85f02afc10c29a8034b950eb"
	I1101 08:32:51.916147 2322990 cri.go:89] found id: "0603dc6c6335f97df7e85d9a14e859a49db2974a48e29156dd5264d896b4de45"
	I1101 08:32:51.916151 2322990 cri.go:89] found id: "d0048c30bd26213dfb453fa2bbd938c97e55fab6b53fc18bf545cdf3d996629d"
	I1101 08:32:51.916161 2322990 cri.go:89] found id: "0881184118c48ea6a57033511f480150827ad00b72255518f4d483725cab9f6c"
	I1101 08:32:51.916165 2322990 cri.go:89] found id: "4f21a033f7625d849deaefcdab250333db4bcf976055c2054e5820079f2d598e"
	I1101 08:32:51.916168 2322990 cri.go:89] found id: "5d0f635d3192a9e4f37b1f74942ca9a6d8846c5343e838584565abab0973a4b6"
	I1101 08:32:51.916171 2322990 cri.go:89] found id: "058fd3f4c2519a11447a33c3880fa2b1da6db273202e78739d3bb6bc56aafea3"
	I1101 08:32:51.916174 2322990 cri.go:89] found id: "f4379003f8bbbe0705cf7426f24a33ec6aaeb1b1f4fbd166749ec7eb68e28872"
	I1101 08:32:51.916176 2322990 cri.go:89] found id: "8208bb01eece1ad45ab18a4c4a3a0d21d53697dbf385e141bee5bd9ba3f5de1c"
	I1101 08:32:51.916181 2322990 cri.go:89] found id: "3c3aa06bb4ba09d56fe9add836fcacd57122f3975b1924a516b3f65b7dd51481"
	I1101 08:32:51.916184 2322990 cri.go:89] found id: "b7a004a1dd4c8a3998b83517cac0d350eff63e109d1288d34cf9bd98bd0dab69"
	I1101 08:32:51.916192 2322990 cri.go:89] found id: "07263ae55437dd8f877371c44f48f64a9062ae7d3979897f96b212a18ebf56d0"
	I1101 08:32:51.916195 2322990 cri.go:89] found id: "5931a7ff4389c4f1514bfe1a6d1b0c5c1f689a7388238437090ed28390f210ea"
	I1101 08:32:51.916198 2322990 cri.go:89] found id: "fae02c07e9b59780efff42cf36c0cce0b725f4a0d809231656f5017f195aebe7"
	I1101 08:32:51.916202 2322990 cri.go:89] found id: "8a52242ff83bb2c360c37d00a820f361e325851ade8acc4cc79d3753a40747c2"
	I1101 08:32:51.916206 2322990 cri.go:89] found id: "2567a3a7bafb70b92331208292b9e993dda24d204dd0e1335895f63c557be7b0"
	I1101 08:32:51.916209 2322990 cri.go:89] found id: "8b0193372487bea326225079bf14bbd934e98d53cba7eaf50fc1bc3f324dcf89"
	I1101 08:32:51.916212 2322990 cri.go:89] found id: ""
	I1101 08:32:51.916260 2322990 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:32:51.930850 2322990 out.go:203] 
	W1101 08:32:51.933914 2322990 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:32:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:32:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:32:51.933937 2322990 out.go:285] * 
	* 
	W1101 08:32:51.945585 2322990 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:32:51.948640 2322990 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-377223 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.26s)
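The Yakd pod itself was healthy within six seconds; the disable fails on the same paused-state check as the other addon tests in this run. A quick sanity check before retrying (a sketch using the standard status and unpause subcommands; only run unpause if status actually reports a paused component):

	out/minikube-linux-arm64 -p addons-377223 status
	out/minikube-linux-arm64 -p addons-377223 unpause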

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (603.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-700813 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-700813 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-wp8q9" [05c34bd6-9fb7-4c65-80b4-a895f26d58d6] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1101 08:42:35.748237 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:43:03.453874 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:47:35.748154 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-700813 -n functional-700813
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-01 08:51:39.074722708 +0000 UTC m=+1336.063982621
functional_test.go:1645: (dbg) Run:  kubectl --context functional-700813 describe po hello-node-connect-7d85dfc575-wp8q9 -n default
functional_test.go:1645: (dbg) kubectl --context functional-700813 describe po hello-node-connect-7d85dfc575-wp8q9 -n default:
Name:             hello-node-connect-7d85dfc575-wp8q9
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-700813/192.168.49.2
Start Time:       Sat, 01 Nov 2025 08:41:38 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
IP:           10.244.0.11
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xn9qg (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-xn9qg:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-wp8q9 to functional-700813
Normal   Pulling    7m4s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m4s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m4s (x5 over 10m)    kubelet            Error: ErrImagePull
Normal   BackOff    4m54s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m54s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-700813 logs hello-node-connect-7d85dfc575-wp8q9 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-700813 logs hello-node-connect-7d85dfc575-wp8q9 -n default: exit status 1 (88.614669ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-wp8q9" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-700813 logs hello-node-connect-7d85dfc575-wp8q9 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-700813 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-wp8q9
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-700813/192.168.49.2
Start Time:       Sat, 01 Nov 2025 08:41:38 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
IP:           10.244.0.11
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xn9qg (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-xn9qg:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-wp8q9 to functional-700813
Normal   Pulling    7m4s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m4s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m4s (x5 over 10m)    kubelet            Error: ErrImagePull
Normal   BackOff    4m54s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m54s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-700813 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-700813 logs -l app=hello-node-connect: exit status 1 (86.705348ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-wp8q9" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-700813 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-700813 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.96.194.92
IPs:                      10.96.194.92
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30671/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
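The empty Endpoints line matches the pod events above: the deployment was created with the short image name kicbase/echo-server, and with CRI-O's short-name mode set to enforcing the unqualified kicbase/echo-server:latest resolves to an ambiguous list and the pull is rejected, so no pod ever becomes Ready behind the NodePort. A workaround sketch (assuming the image is published on Docker Hub, so a fully-qualified name is unambiguous, and that the failed deployment is deleted first):

	kubectl --context functional-700813 delete deployment hello-node-connect
	kubectl --context functional-700813 create deployment hello-node-connect --image docker.io/kicbase/echo-server
	kubectl --context functional-700813 expose deployment hello-node-connect --type=NodePort --port=8080
	# to inspect how short-name resolution is configured on the node (assuming the standard /etc/containers layout):
	out/minikube-linux-arm64 -p functional-700813 ssh -- grep -R short-name /etc/containers/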
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-700813
helpers_test.go:243: (dbg) docker inspect functional-700813:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "47f395cd442b2ba9798903cd2bf57f19ec3660ef1599c6447aa2d5b94f98d0da",
	        "Created": "2025-11-01T08:37:01.028466337Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2331633,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T08:37:01.087822866Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/47f395cd442b2ba9798903cd2bf57f19ec3660ef1599c6447aa2d5b94f98d0da/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/47f395cd442b2ba9798903cd2bf57f19ec3660ef1599c6447aa2d5b94f98d0da/hostname",
	        "HostsPath": "/var/lib/docker/containers/47f395cd442b2ba9798903cd2bf57f19ec3660ef1599c6447aa2d5b94f98d0da/hosts",
	        "LogPath": "/var/lib/docker/containers/47f395cd442b2ba9798903cd2bf57f19ec3660ef1599c6447aa2d5b94f98d0da/47f395cd442b2ba9798903cd2bf57f19ec3660ef1599c6447aa2d5b94f98d0da-json.log",
	        "Name": "/functional-700813",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-700813:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-700813",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "47f395cd442b2ba9798903cd2bf57f19ec3660ef1599c6447aa2d5b94f98d0da",
	                "LowerDir": "/var/lib/docker/overlay2/baf6ce23318a9e6475f5480e8c7d27b0bcb03ee4b095aba9148a8d24da532613-init/diff:/var/lib/docker/overlay2/e248e2c4c8c52e2b41c7098e27a1e6d3433c7b0d01c47093073da500268c4b77/diff",
	                "MergedDir": "/var/lib/docker/overlay2/baf6ce23318a9e6475f5480e8c7d27b0bcb03ee4b095aba9148a8d24da532613/merged",
	                "UpperDir": "/var/lib/docker/overlay2/baf6ce23318a9e6475f5480e8c7d27b0bcb03ee4b095aba9148a8d24da532613/diff",
	                "WorkDir": "/var/lib/docker/overlay2/baf6ce23318a9e6475f5480e8c7d27b0bcb03ee4b095aba9148a8d24da532613/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-700813",
	                "Source": "/var/lib/docker/volumes/functional-700813/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-700813",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-700813",
	                "name.minikube.sigs.k8s.io": "functional-700813",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e3e61da753bd4615db715ccc6bfc51de8dd6dbf35a341e3d49493427edd843e4",
	            "SandboxKey": "/var/run/docker/netns/e3e61da753bd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36065"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36066"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36069"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36067"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36068"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-700813": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:1b:df:8b:82:b0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c2373c54cddb1aa67900b536ff5b8bd44638ef6f6d012cf43741d41afc985aae",
	                    "EndpointID": "3bb3b5abc24d2109737f6edcc14c32bc71d50af0715720a09e813522081d3660",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-700813",
	                        "47f395cd442b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-700813 -n functional-700813
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-700813 logs -n 25: (1.486713734s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                  ARGS                                                  │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ dashboard      │ --url --port 36195 -p functional-700813 --alsologtostderr -v=1                                         │ functional-700813 │ jenkins │ v1.37.0 │ 01 Nov 25 08:41 UTC │ 01 Nov 25 08:41 UTC │
	│ addons         │ functional-700813 addons list                                                                          │ functional-700813 │ jenkins │ v1.37.0 │ 01 Nov 25 08:41 UTC │ 01 Nov 25 08:41 UTC │
	│ addons         │ functional-700813 addons list -o json                                                                  │ functional-700813 │ jenkins │ v1.37.0 │ 01 Nov 25 08:41 UTC │ 01 Nov 25 08:41 UTC │
	│ service        │ functional-700813 service list                                                                         │ functional-700813 │ jenkins │ v1.37.0 │ 01 Nov 25 08:50 UTC │ 01 Nov 25 08:50 UTC │
	│ service        │ functional-700813 service list -o json                                                                 │ functional-700813 │ jenkins │ v1.37.0 │ 01 Nov 25 08:50 UTC │ 01 Nov 25 08:50 UTC │
	│ service        │ functional-700813 service --namespace=default --https --url hello-node                                 │ functional-700813 │ jenkins │ v1.37.0 │ 01 Nov 25 08:50 UTC │                     │
	│ service        │ functional-700813 service hello-node --url --format={{.IP}}                                            │ functional-700813 │ jenkins │ v1.37.0 │ 01 Nov 25 08:50 UTC │                     │
	│ service        │ functional-700813 service hello-node --url                                                             │ functional-700813 │ jenkins │ v1.37.0 │ 01 Nov 25 08:50 UTC │                     │
	│ ssh            │ functional-700813 ssh sudo cat /etc/ssl/certs/2315982.pem                                              │ functional-700813 │ jenkins │ v1.37.0 │ 01 Nov 25 08:50 UTC │ 01 Nov 25 08:50 UTC │
	│ ssh            │ functional-700813 ssh sudo cat /usr/share/ca-certificates/2315982.pem                                  │ functional-700813 │ jenkins │ v1.37.0 │ 01 Nov 25 08:50 UTC │ 01 Nov 25 08:50 UTC │
	│ ssh            │ functional-700813 ssh sudo cat /etc/ssl/certs/51391683.0                                               │ functional-700813 │ jenkins │ v1.37.0 │ 01 Nov 25 08:50 UTC │ 01 Nov 25 08:50 UTC │
	│ ssh            │ functional-700813 ssh sudo cat /etc/ssl/certs/23159822.pem                                             │ functional-700813 │ jenkins │ v1.37.0 │ 01 Nov 25 08:50 UTC │ 01 Nov 25 08:50 UTC │
	│ ssh            │ functional-700813 ssh sudo cat /usr/share/ca-certificates/23159822.pem                                 │ functional-700813 │ jenkins │ v1.37.0 │ 01 Nov 25 08:50 UTC │ 01 Nov 25 08:50 UTC │
	│ ssh            │ functional-700813 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                               │ functional-700813 │ jenkins │ v1.37.0 │ 01 Nov 25 08:50 UTC │ 01 Nov 25 08:50 UTC │
	│ ssh            │ functional-700813 ssh sudo cat /etc/test/nested/copy/2315982/hosts                                     │ functional-700813 │ jenkins │ v1.37.0 │ 01 Nov 25 08:50 UTC │ 01 Nov 25 08:50 UTC │
	│ image          │ functional-700813 image ls --format short --alsologtostderr                                            │ functional-700813 │ jenkins │ v1.37.0 │ 01 Nov 25 08:50 UTC │ 01 Nov 25 08:50 UTC │
	│ image          │ functional-700813 image ls --format yaml --alsologtostderr                                             │ functional-700813 │ jenkins │ v1.37.0 │ 01 Nov 25 08:50 UTC │ 01 Nov 25 08:50 UTC │
	│ ssh            │ functional-700813 ssh pgrep buildkitd                                                                  │ functional-700813 │ jenkins │ v1.37.0 │ 01 Nov 25 08:50 UTC │                     │
	│ image          │ functional-700813 image build -t localhost/my-image:functional-700813 testdata/build --alsologtostderr │ functional-700813 │ jenkins │ v1.37.0 │ 01 Nov 25 08:50 UTC │ 01 Nov 25 08:50 UTC │
	│ image          │ functional-700813 image ls                                                                             │ functional-700813 │ jenkins │ v1.37.0 │ 01 Nov 25 08:50 UTC │ 01 Nov 25 08:50 UTC │
	│ image          │ functional-700813 image ls --format json --alsologtostderr                                             │ functional-700813 │ jenkins │ v1.37.0 │ 01 Nov 25 08:50 UTC │ 01 Nov 25 08:50 UTC │
	│ image          │ functional-700813 image ls --format table --alsologtostderr                                            │ functional-700813 │ jenkins │ v1.37.0 │ 01 Nov 25 08:50 UTC │ 01 Nov 25 08:50 UTC │
	│ update-context │ functional-700813 update-context --alsologtostderr -v=2                                                │ functional-700813 │ jenkins │ v1.37.0 │ 01 Nov 25 08:50 UTC │ 01 Nov 25 08:50 UTC │
	│ update-context │ functional-700813 update-context --alsologtostderr -v=2                                                │ functional-700813 │ jenkins │ v1.37.0 │ 01 Nov 25 08:50 UTC │ 01 Nov 25 08:50 UTC │
	│ update-context │ functional-700813 update-context --alsologtostderr -v=2                                                │ functional-700813 │ jenkins │ v1.37.0 │ 01 Nov 25 08:50 UTC │ 01 Nov 25 08:50 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 08:40:53
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 08:40:53.842795 2341324 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:40:53.843199 2341324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:40:53.843213 2341324 out.go:374] Setting ErrFile to fd 2...
	I1101 08:40:53.843219 2341324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:40:53.843593 2341324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 08:40:53.844081 2341324 out.go:368] Setting JSON to false
	I1101 08:40:53.844986 2341324 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":62600,"bootTime":1761923854,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 08:40:53.845177 2341324 start.go:143] virtualization:  
	I1101 08:40:53.848231 2341324 out.go:179] * [functional-700813] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 08:40:53.851992 2341324 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 08:40:53.852169 2341324 notify.go:221] Checking for updates...
	I1101 08:40:53.858035 2341324 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 08:40:53.860943 2341324 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 08:40:53.863816 2341324 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2314135/.minikube
	I1101 08:40:53.866687 2341324 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 08:40:53.869646 2341324 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 08:40:53.873725 2341324 config.go:182] Loaded profile config "functional-700813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:40:53.874268 2341324 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 08:40:53.899237 2341324 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 08:40:53.899373 2341324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:40:53.972758 2341324 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 08:40:53.963811711 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 08:40:53.972870 2341324 docker.go:319] overlay module found
	I1101 08:40:53.975840 2341324 out.go:179] * Using the docker driver based on existing profile
	I1101 08:40:53.978781 2341324 start.go:309] selected driver: docker
	I1101 08:40:53.978801 2341324 start.go:930] validating driver "docker" against &{Name:functional-700813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-700813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 08:40:53.978933 2341324 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 08:40:53.979045 2341324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:40:54.035248 2341324 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 08:40:54.025295274 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 08:40:54.035710 2341324 cni.go:84] Creating CNI manager for ""
	I1101 08:40:54.035765 2341324 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 08:40:54.035818 2341324 start.go:353] cluster config:
	{Name:functional-700813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-700813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 08:40:54.039196 2341324 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Nov 01 08:41:30 functional-700813 crio[3825]: time="2025-11-01T08:41:30.849849592Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 08:41:30 functional-700813 crio[3825]: time="2025-11-01T08:41:30.865523139Z" level=info msg="Created container 4b396c654ae32f82c0974085878a1871026d1d02553d3396e0570106783b7233: default/sp-pod/myfrontend" id=8c9a5244-7f78-452e-932d-2795ddc93c34 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 08:41:30 functional-700813 crio[3825]: time="2025-11-01T08:41:30.868321747Z" level=info msg="Starting container: 4b396c654ae32f82c0974085878a1871026d1d02553d3396e0570106783b7233" id=954a2093-c4f3-49bc-a393-c1eff55bf299 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 08:41:30 functional-700813 crio[3825]: time="2025-11-01T08:41:30.871065701Z" level=info msg="Started container" PID=6542 containerID=4b396c654ae32f82c0974085878a1871026d1d02553d3396e0570106783b7233 description=default/sp-pod/myfrontend id=954a2093-c4f3-49bc-a393-c1eff55bf299 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f9db4e827f6d826edbb82dadd1729628a28ab704d0584b24f12cd9a1b7fa54d2
	Nov 01 08:41:38 functional-700813 crio[3825]: time="2025-11-01T08:41:38.906169088Z" level=info msg="Running pod sandbox: default/hello-node-connect-7d85dfc575-wp8q9/POD" id=4751bb0b-3950-478a-bbaf-9126e9ed6155 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 08:41:38 functional-700813 crio[3825]: time="2025-11-01T08:41:38.906242752Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 08:41:38 functional-700813 crio[3825]: time="2025-11-01T08:41:38.912488958Z" level=info msg="Got pod network &{Name:hello-node-connect-7d85dfc575-wp8q9 Namespace:default ID:0484672fbfe7246c21070d2bff1b095fdca6ca612de19004013df7d489ca3bfc UID:05c34bd6-9fb7-4c65-80b4-a895f26d58d6 NetNS:/var/run/netns/91333c84-57d0-4917-84b1-e80efb3620f9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4002086490}] Aliases:map[]}"
	Nov 01 08:41:38 functional-700813 crio[3825]: time="2025-11-01T08:41:38.912527677Z" level=info msg="Adding pod default_hello-node-connect-7d85dfc575-wp8q9 to CNI network \"kindnet\" (type=ptp)"
	Nov 01 08:41:38 functional-700813 crio[3825]: time="2025-11-01T08:41:38.923258177Z" level=info msg="Got pod network &{Name:hello-node-connect-7d85dfc575-wp8q9 Namespace:default ID:0484672fbfe7246c21070d2bff1b095fdca6ca612de19004013df7d489ca3bfc UID:05c34bd6-9fb7-4c65-80b4-a895f26d58d6 NetNS:/var/run/netns/91333c84-57d0-4917-84b1-e80efb3620f9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4002086490}] Aliases:map[]}"
	Nov 01 08:41:38 functional-700813 crio[3825]: time="2025-11-01T08:41:38.923405094Z" level=info msg="Checking pod default_hello-node-connect-7d85dfc575-wp8q9 for CNI network kindnet (type=ptp)"
	Nov 01 08:41:38 functional-700813 crio[3825]: time="2025-11-01T08:41:38.92738208Z" level=info msg="Ran pod sandbox 0484672fbfe7246c21070d2bff1b095fdca6ca612de19004013df7d489ca3bfc with infra container: default/hello-node-connect-7d85dfc575-wp8q9/POD" id=4751bb0b-3950-478a-bbaf-9126e9ed6155 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 08:41:38 functional-700813 crio[3825]: time="2025-11-01T08:41:38.929025693Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=fed9e708-a19a-4a30-a28a-aa619c7dd687 name=/runtime.v1.ImageService/PullImage
	Nov 01 08:41:51 functional-700813 crio[3825]: time="2025-11-01T08:41:51.279687349Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=6844276d-e170-4265-a3b0-b593f777cdc7 name=/runtime.v1.ImageService/PullImage
	Nov 01 08:42:00 functional-700813 crio[3825]: time="2025-11-01T08:42:00.283215407Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4f523599-530b-4f0b-ad5f-1fedae58fad1 name=/runtime.v1.ImageService/PullImage
	Nov 01 08:42:08 functional-700813 crio[3825]: time="2025-11-01T08:42:08.798240442Z" level=info msg="Stopping pod sandbox: 77e266c1b523e23267f280d5367846a940ffb1943dba557a6284513c880ea5ea" id=edd673c8-ba99-4066-84ee-082f56196d19 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 08:42:08 functional-700813 crio[3825]: time="2025-11-01T08:42:08.798303259Z" level=info msg="Stopped pod sandbox (already stopped): 77e266c1b523e23267f280d5367846a940ffb1943dba557a6284513c880ea5ea" id=edd673c8-ba99-4066-84ee-082f56196d19 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 08:42:08 functional-700813 crio[3825]: time="2025-11-01T08:42:08.799104451Z" level=info msg="Removing pod sandbox: 77e266c1b523e23267f280d5367846a940ffb1943dba557a6284513c880ea5ea" id=b1eb64a5-ac1a-4ec2-8577-42f68c1f8e47 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 08:42:08 functional-700813 crio[3825]: time="2025-11-01T08:42:08.804099886Z" level=info msg="Removed pod sandbox: 77e266c1b523e23267f280d5367846a940ffb1943dba557a6284513c880ea5ea" id=b1eb64a5-ac1a-4ec2-8577-42f68c1f8e47 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 08:42:18 functional-700813 crio[3825]: time="2025-11-01T08:42:18.279635949Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d0c995a9-4d5f-4858-b5d6-9ed4dc1c2a43 name=/runtime.v1.ImageService/PullImage
	Nov 01 08:43:11 functional-700813 crio[3825]: time="2025-11-01T08:43:11.279397433Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c527db65-65d2-4600-81e2-ea8dd54389e0 name=/runtime.v1.ImageService/PullImage
	Nov 01 08:43:32 functional-700813 crio[3825]: time="2025-11-01T08:43:32.280845631Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=151fbc2a-7cf2-4783-aa1c-d4baeac77875 name=/runtime.v1.ImageService/PullImage
	Nov 01 08:44:35 functional-700813 crio[3825]: time="2025-11-01T08:44:35.279620094Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=31cb4355-9a0b-426f-ad1f-e0e9c023f9cc name=/runtime.v1.ImageService/PullImage
	Nov 01 08:46:13 functional-700813 crio[3825]: time="2025-11-01T08:46:13.279232381Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=9c73aa48-967d-465f-8def-73a035376a7c name=/runtime.v1.ImageService/PullImage
	Nov 01 08:47:22 functional-700813 crio[3825]: time="2025-11-01T08:47:22.279773846Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7e5cd245-f4b3-4804-94cb-0a9aeb040c4b name=/runtime.v1.ImageService/PullImage
	Nov 01 08:51:16 functional-700813 crio[3825]: time="2025-11-01T08:51:16.280403066Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f7be7f54-644d-4a6e-98b7-7db77584ca4c name=/runtime.v1.ImageService/PullImage
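	(The run of PullImage entries above shows kicbase/echo-server:latest being requested repeatedly, with no corresponding pull-completed entry in this excerpt. A minimal way to check from the node whether the image ever arrived, assuming SSH passthrough via the minikube binary and that crictl is present in the node image:)
	
	  # assumption: the grep pattern and ssh form are illustrative, not taken from this report
	  out/minikube-linux-arm64 -p functional-700813 ssh -- sudo crictl images | grep echo-server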
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	4b396c654ae32       docker.io/library/nginx@sha256:89a1bafe028b2980994d974115ee7268ef851a6eb7c9cb9626d8035b08ba4424                  10 minutes ago      Running             myfrontend                  0                   f9db4e827f6d8       sp-pod                                       default
	cf1e0e5624262       docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a   10 minutes ago      Running             dashboard-metrics-scraper   0                   4a6513f7b10e6       dashboard-metrics-scraper-77bf4d6c4c-l9mvd   kubernetes-dashboard
	811568a70c364       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf         10 minutes ago      Running             kubernetes-dashboard        0                   0174dc2efe466       kubernetes-dashboard-855c9754f9-sjl8s        kubernetes-dashboard
	680065089e0a2       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90                  10 minutes ago      Running             nginx                       0                   350fa3b8e92f6       nginx-svc                                    default
	5fd70fd4e66d0       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e              10 minutes ago      Exited              mount-munger                0                   c4f942e10fecc       busybox-mount                                default
	c50e3d7f65a0e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                 11 minutes ago      Running             coredns                     3                   95a810ad2cdfa       coredns-66bc5c9577-btwz7                     kube-system
	099846f24d116       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                 11 minutes ago      Running             kindnet-cni                 3                   37b3fe61b5800       kindnet-xxz2r                                kube-system
	1e09a82df1d99       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                 11 minutes ago      Running             kube-proxy                  3                   93ab5e8343d53       kube-proxy-lb5tf                             kube-system
	1a460e8626e92       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                 11 minutes ago      Running             storage-provisioner         4                   b8efb48c4980d       storage-provisioner                          kube-system
	3fe1bad278057       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                 11 minutes ago      Running             kube-apiserver              0                   8635597261cec       kube-apiserver-functional-700813             kube-system
	6e49e7ea7b279       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                 11 minutes ago      Running             kube-scheduler              3                   1ca3b2edea05d       kube-scheduler-functional-700813             kube-system
	9059a93be28ab       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                 11 minutes ago      Running             kube-controller-manager     4                   5a443b2852bb2       kube-controller-manager-functional-700813    kube-system
	7ca45f162453d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                 11 minutes ago      Running             etcd                        3                   e8fca3c640d1b       etcd-functional-700813                       kube-system
	0593f9a73d48a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                 11 minutes ago      Exited              storage-provisioner         3                   b8efb48c4980d       storage-provisioner                          kube-system
	b66b1d8d0a4a9       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                 12 minutes ago      Exited              kube-controller-manager     3                   5a443b2852bb2       kube-controller-manager-functional-700813    kube-system
	e570e8a267ed7       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                 12 minutes ago      Exited              etcd                        2                   e8fca3c640d1b       etcd-functional-700813                       kube-system
	ffe45bbb6cf2e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                 12 minutes ago      Exited              coredns                     2                   95a810ad2cdfa       coredns-66bc5c9577-btwz7                     kube-system
	c98087a8f7b1d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                 12 minutes ago      Exited              kindnet-cni                 2                   37b3fe61b5800       kindnet-xxz2r                                kube-system
	7ca671fdb4b92       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                 12 minutes ago      Exited              kube-proxy                  2                   93ab5e8343d53       kube-proxy-lb5tf                             kube-system
	758a9b79ed066       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                 13 minutes ago      Exited              kube-scheduler              2                   1ca3b2edea05d       kube-scheduler-functional-700813             kube-system
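	(The table above is the CRI container listing for the node. A minimal way to reproduce a listing of this shape, assuming the profile is still running and crictl is available in the node image:)
	
	  # assumption: invocation is illustrative; crictl ps -a lists both running and exited containers
	  out/minikube-linux-arm64 -p functional-700813 ssh -- sudo crictl ps -a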
	
	
	==> coredns [c50e3d7f65a0e4ce3e78c53ee52862393a95bb0129636ffe1db9733c08bef3cf] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41256 - 39942 "HINFO IN 8908029713986260785.4048736482657886579. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02450403s
	
	
	==> coredns [ffe45bbb6cf2e8b5ef3f762c4ccf8a3f1d2840a5c540fc5a43b19be91deb4e53] <==
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47433 - 56656 "HINFO IN 8307420672414739909.7290420385006787940. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02249172s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-700813
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-700813
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=functional-700813
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T08_37_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 08:37:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-700813
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 08:51:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 08:51:14 +0000   Sat, 01 Nov 2025 08:37:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 08:51:14 +0000   Sat, 01 Nov 2025 08:37:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 08:51:14 +0000   Sat, 01 Nov 2025 08:37:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 08:51:14 +0000   Sat, 01 Nov 2025 08:38:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-700813
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                fb39a774-663f-419d-84ae-49f785acd717
	  Boot ID:                    eebecd53-57fd-46e5-aa39-103fca906436
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-bkh8t                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-node-connect-7d85dfc575-wp8q9           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-btwz7                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-functional-700813                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-xxz2r                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-functional-700813              250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-functional-700813     200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-lb5tf                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-functional-700813              100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-l9mvd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-sjl8s         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 14m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 12m                kube-proxy       
	  Warning  CgroupV1                 14m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  14m                kubelet          Node functional-700813 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                kubelet          Node functional-700813 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m                kubelet          Node functional-700813 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           14m                node-controller  Node functional-700813 event: Registered Node functional-700813 in Controller
	  Normal   NodeReady                13m                kubelet          Node functional-700813 status is now: NodeReady
	  Warning  ContainerGCFailed        13m                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           12m                node-controller  Node functional-700813 event: Registered Node functional-700813 in Controller
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-700813 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-700813 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-700813 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node functional-700813 event: Registered Node functional-700813 in Controller
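	(The node description above has the shape of kubectl describe node output. A minimal reproduction, assuming the kubeconfig context points at this cluster; the kubectl passthrough bundled with the minikube binary is an equivalent alternative:)
	
	  # assumption: direct kubectl access; context selection not shown
	  kubectl describe node functional-700813
	  # or, via the bundled kubectl passthrough:
	  out/minikube-linux-arm64 -p functional-700813 kubectl -- describe node functional-700813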
	
	
	==> dmesg <==
	[Nov 1 08:08] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:09] overlayfs: idmapped layers are currently not supported
	[ +41.926823] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:10] overlayfs: idmapped layers are currently not supported
	[ +39.688208] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:11] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:13] overlayfs: idmapped layers are currently not supported
	[ +17.643407] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:15] overlayfs: idmapped layers are currently not supported
	[ +15.590074] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:16] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:17] overlayfs: idmapped layers are currently not supported
	[ +25.755276] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:18] overlayfs: idmapped layers are currently not supported
	[  +9.757193] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:21] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:22] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:23] overlayfs: idmapped layers are currently not supported
	[  +4.855106] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:28] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 1 08:30] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:36] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:37] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [7ca45f162453dffe20c33d075e04fbc963d2d0d67a07770bb273085c1bba2819] <==
	{"level":"warn","ts":"2025-11-01T08:40:08.928281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:40:08.950268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:40:08.992087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:40:08.997370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:40:09.015229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:40:09.031230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:40:09.075902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:40:09.096991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:40:09.112571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:40:09.129069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:40:09.155789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:40:09.168518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:40:09.187953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:40:09.228391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:40:09.249351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:40:09.273885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:40:09.274700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:40:09.322340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:40:09.356205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:40:09.377027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:40:09.389181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:40:09.447938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40606","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T08:50:08.082641Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1241}
	{"level":"info","ts":"2025-11-01T08:50:08.106848Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1241,"took":"23.895656ms","hash":3029685348,"current-db-size-bytes":3620864,"current-db-size":"3.6 MB","current-db-size-in-use-bytes":1798144,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-11-01T08:50:08.106900Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3029685348,"revision":1241,"compact-revision":-1}
	
	
	==> etcd [e570e8a267ed7362a959ef20fea3192559af5f9c4d2a4d94e595b27ccc87c62f] <==
	{"level":"warn","ts":"2025-11-01T08:39:20.463757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:39:20.478949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:39:20.501318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:39:20.526555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:39:20.542344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:39:20.554913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:39:20.605046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53442","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T08:39:54.044058Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-01T08:39:54.044108Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-700813","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-01T08:39:54.044211Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T08:39:54.184404Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T08:39:54.185841Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-01T08:39:54.185884Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T08:39:54.185937Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T08:39:54.185947Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T08:39:54.185923Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-11-01T08:39:54.186018Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T08:39:54.186034Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T08:39:54.186041Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T08:39:54.186059Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-01T08:39:54.186069Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-01T08:39:54.189758Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-01T08:39:54.189836Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T08:39:54.189874Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-01T08:39:54.189881Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-700813","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 08:51:40 up 17:34,  0 user,  load average: 0.18, 0.43, 1.12
	Linux functional-700813 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [099846f24d11609cfeee9ca5a276238c4fc1ba5faf4da3438498e819ee1d2c26] <==
	I1101 08:49:31.960089       1 main.go:301] handling current node
	I1101 08:49:41.955791       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:49:41.955927       1 main.go:301] handling current node
	I1101 08:49:51.954396       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:49:51.954504       1 main.go:301] handling current node
	I1101 08:50:01.955033       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:50:01.955386       1 main.go:301] handling current node
	I1101 08:50:11.961694       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:50:11.961793       1 main.go:301] handling current node
	I1101 08:50:21.957532       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:50:21.957635       1 main.go:301] handling current node
	I1101 08:50:31.958501       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:50:31.958596       1 main.go:301] handling current node
	I1101 08:50:41.960147       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:50:41.960176       1 main.go:301] handling current node
	I1101 08:50:51.962082       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:50:51.962198       1 main.go:301] handling current node
	I1101 08:51:01.958370       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:51:01.958479       1 main.go:301] handling current node
	I1101 08:51:11.956058       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:51:11.956157       1 main.go:301] handling current node
	I1101 08:51:21.961141       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:51:21.961179       1 main.go:301] handling current node
	I1101 08:51:31.956010       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:51:31.956120       1 main.go:301] handling current node
	
	
	==> kindnet [c98087a8f7b1d121cff6317249bd04a939cee53d6a830c62bb9ab971a25de0cb] <==
	E1101 08:38:55.913153       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:44358->10.96.0.1:443: read: connection reset by peer" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 08:38:55.920087       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:44370->10.96.0.1:443: read: connection reset by peer" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 08:38:56.834236       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 08:38:57.214579       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 08:38:57.317580       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 08:38:57.415615       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 08:38:59.295577       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 08:38:59.836456       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 08:39:00.105063       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 08:39:00.385919       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 08:39:03.678184       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 08:39:03.884396       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 08:39:05.558597       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 08:39:05.992401       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 08:39:13.915248       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 08:39:14.177716       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 08:39:16.052532       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 08:39:21.348986       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1101 08:39:36.848726       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:39:36.848765       1 main.go:301] handling current node
	I1101 08:39:44.050634       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 08:39:44.050742       1 metrics.go:72] Registering metrics
	I1101 08:39:44.050819       1 controller.go:711] "Syncing nftables rules"
	I1101 08:39:46.848720       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:39:46.848761       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3fe1bad2780573120635e8db1b3015c34c6baba39dcf1a13a042c193a0219f8b] <==
	I1101 08:40:10.398904       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 08:40:10.400453       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 08:40:10.400688       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 08:40:10.418371       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1101 08:40:10.443828       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 08:40:11.122847       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 08:40:11.268739       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 08:40:12.295999       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 08:40:12.559725       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 08:40:12.629762       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 08:40:12.637155       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 08:40:13.716881       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 08:40:13.991906       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 08:40:14.042214       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 08:40:29.658695       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.214.179"}
	I1101 08:40:35.269948       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.102.81.142"}
	I1101 08:40:54.862377       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.107.91.193"}
	I1101 08:41:04.061502       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 08:41:04.345334       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.203.99"}
	I1101 08:41:04.375989       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.170.227"}
	E1101 08:41:29.279301       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:38150: use of closed network connection
	E1101 08:41:29.935842       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E1101 08:41:38.275917       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:44734: use of closed network connection
	I1101 08:41:38.708367       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.194.92"}
	I1101 08:50:10.326683       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [9059a93be28aba4c1d33db635090fe1ca431347079a91b876a1c84d8155b77e4] <==
	I1101 08:40:13.743974       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 08:40:13.743980       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 08:40:13.750921       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 08:40:13.750963       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 08:40:13.750978       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 08:40:13.755183       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 08:40:13.762378       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 08:40:13.765757       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 08:40:13.772002       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 08:40:13.778198       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 08:40:13.784849       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 08:40:13.785984       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 08:40:13.786075       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 08:40:13.786116       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 08:40:13.786120       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 08:40:13.786171       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1101 08:41:04.158214       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 08:41:04.171195       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 08:41:04.176406       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 08:41:04.193830       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 08:41:04.201299       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 08:41:04.201478       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 08:41:04.216707       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 08:41:04.220352       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 08:41:04.221741       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [b66b1d8d0a4a9db0d989ab50d4a8892a72f3a92359e3a54963bb27592222fb3a] <==
	I1101 08:39:31.178699       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 08:39:31.178778       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 08:39:31.182343       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 08:39:31.184177       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 08:39:31.186517       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 08:39:31.186613       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 08:39:31.186697       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-700813"
	I1101 08:39:31.186736       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 08:39:31.189068       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 08:39:31.190812       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 08:39:31.192020       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 08:39:31.193088       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 08:39:31.199363       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 08:39:31.199424       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 08:39:31.199454       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 08:39:31.203614       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 08:39:31.204363       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 08:39:31.204523       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 08:39:31.204762       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 08:39:31.204808       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 08:39:31.204845       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 08:39:31.204870       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 08:39:31.204976       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 08:39:31.205157       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 08:39:31.212311       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	
	
	==> kube-proxy [1e09a82df1d9907724252e223e9fcf0dac191e545fa828ca06267e176b1909d5] <==
	I1101 08:40:11.772914       1 server_linux.go:53] "Using iptables proxy"
	I1101 08:40:11.863829       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 08:40:11.964014       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 08:40:11.964052       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1101 08:40:11.964142       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 08:40:12.008836       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 08:40:12.008985       1 server_linux.go:132] "Using iptables Proxier"
	I1101 08:40:12.018269       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 08:40:12.018646       1 server.go:527] "Version info" version="v1.34.1"
	I1101 08:40:12.018710       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 08:40:12.023656       1 config.go:200] "Starting service config controller"
	I1101 08:40:12.023749       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 08:40:12.023793       1 config.go:106] "Starting endpoint slice config controller"
	I1101 08:40:12.023821       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 08:40:12.023895       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 08:40:12.023924       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 08:40:12.027588       1 config.go:309] "Starting node config controller"
	I1101 08:40:12.027679       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 08:40:12.027729       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 08:40:12.123937       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 08:40:12.124005       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 08:40:12.125668       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [7ca671fdb4b92330af2587e3664cb2261cf5db61ef6a2f25bbba64f90e8eb011] <==
	E1101 08:38:55.917158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-700813&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:34564->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 08:38:57.292955       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-700813&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 08:38:59.043680       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-700813&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 08:39:03.743930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-700813&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 08:39:10.588587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-700813&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1101 08:39:34.369864       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 08:39:34.369903       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1101 08:39:34.370040       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 08:39:34.387614       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 08:39:34.387674       1 server_linux.go:132] "Using iptables Proxier"
	I1101 08:39:34.391358       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 08:39:34.391638       1 server.go:527] "Version info" version="v1.34.1"
	I1101 08:39:34.391658       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 08:39:34.393599       1 config.go:200] "Starting service config controller"
	I1101 08:39:34.393684       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 08:39:34.393728       1 config.go:106] "Starting endpoint slice config controller"
	I1101 08:39:34.393755       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 08:39:34.393793       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 08:39:34.393821       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 08:39:34.394517       1 config.go:309] "Starting node config controller"
	I1101 08:39:34.394570       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 08:39:34.394598       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 08:39:34.493794       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 08:39:34.493905       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 08:39:34.493931       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6e49e7ea7b279cadf6fd9f4de88f8088a1dd965a76cc1ea84091b52be2819ec8] <==
	I1101 08:40:09.913194       1 serving.go:386] Generated self-signed cert in-memory
	I1101 08:40:11.014685       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 08:40:11.014725       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 08:40:11.019641       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 08:40:11.019756       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 08:40:11.019828       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 08:40:11.019894       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 08:40:11.019939       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 08:40:11.019981       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 08:40:11.020168       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 08:40:11.020257       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 08:40:11.120656       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 08:40:11.120807       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 08:40:11.120949       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [758a9b79ed06633eaa0e94bcbec7e1d1bf8a6d53e6bc48eab4b79b25a814cfb0] <==
	E1101 08:39:10.898345       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 08:39:10.941603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 08:39:11.225238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 08:39:11.961056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 08:39:12.306108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 08:39:12.453638       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 08:39:12.680960       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 08:39:12.703698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 08:39:12.926244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 08:39:13.159653       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 08:39:13.348090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 08:39:13.469372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 08:39:13.796870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 08:39:13.824667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 08:39:14.478994       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 08:39:14.690532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1101 08:39:15.970168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 08:39:17.520678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1101 08:39:29.535797       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 08:39:54.049835       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1101 08:39:54.049870       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1101 08:39:54.049889       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1101 08:39:54.049913       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 08:39:54.050113       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1101 08:39:54.050129       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 01 08:49:14 functional-700813 kubelet[4142]: E1101 08:49:14.278960    4142 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bkh8t" podUID="e0d3b814-c1bc-4132-baa8-50d76bf12ab6"
	Nov 01 08:49:20 functional-700813 kubelet[4142]: E1101 08:49:20.280972    4142 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wp8q9" podUID="05c34bd6-9fb7-4c65-80b4-a895f26d58d6"
	Nov 01 08:49:29 functional-700813 kubelet[4142]: E1101 08:49:29.279128    4142 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bkh8t" podUID="e0d3b814-c1bc-4132-baa8-50d76bf12ab6"
	Nov 01 08:49:35 functional-700813 kubelet[4142]: E1101 08:49:35.279304    4142 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wp8q9" podUID="05c34bd6-9fb7-4c65-80b4-a895f26d58d6"
	Nov 01 08:49:42 functional-700813 kubelet[4142]: E1101 08:49:42.279501    4142 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bkh8t" podUID="e0d3b814-c1bc-4132-baa8-50d76bf12ab6"
	Nov 01 08:49:50 functional-700813 kubelet[4142]: E1101 08:49:50.278964    4142 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wp8q9" podUID="05c34bd6-9fb7-4c65-80b4-a895f26d58d6"
	Nov 01 08:49:54 functional-700813 kubelet[4142]: E1101 08:49:54.279122    4142 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bkh8t" podUID="e0d3b814-c1bc-4132-baa8-50d76bf12ab6"
	Nov 01 08:50:05 functional-700813 kubelet[4142]: E1101 08:50:05.278800    4142 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wp8q9" podUID="05c34bd6-9fb7-4c65-80b4-a895f26d58d6"
	Nov 01 08:50:07 functional-700813 kubelet[4142]: E1101 08:50:07.279366    4142 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bkh8t" podUID="e0d3b814-c1bc-4132-baa8-50d76bf12ab6"
	Nov 01 08:50:16 functional-700813 kubelet[4142]: E1101 08:50:16.281074    4142 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wp8q9" podUID="05c34bd6-9fb7-4c65-80b4-a895f26d58d6"
	Nov 01 08:50:20 functional-700813 kubelet[4142]: E1101 08:50:20.280653    4142 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bkh8t" podUID="e0d3b814-c1bc-4132-baa8-50d76bf12ab6"
	Nov 01 08:50:31 functional-700813 kubelet[4142]: E1101 08:50:31.278662    4142 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wp8q9" podUID="05c34bd6-9fb7-4c65-80b4-a895f26d58d6"
	Nov 01 08:50:35 functional-700813 kubelet[4142]: E1101 08:50:35.279079    4142 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bkh8t" podUID="e0d3b814-c1bc-4132-baa8-50d76bf12ab6"
	Nov 01 08:50:43 functional-700813 kubelet[4142]: E1101 08:50:43.278813    4142 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wp8q9" podUID="05c34bd6-9fb7-4c65-80b4-a895f26d58d6"
	Nov 01 08:50:49 functional-700813 kubelet[4142]: E1101 08:50:49.279143    4142 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bkh8t" podUID="e0d3b814-c1bc-4132-baa8-50d76bf12ab6"
	Nov 01 08:50:58 functional-700813 kubelet[4142]: E1101 08:50:58.279984    4142 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wp8q9" podUID="05c34bd6-9fb7-4c65-80b4-a895f26d58d6"
	Nov 01 08:51:04 functional-700813 kubelet[4142]: E1101 08:51:04.280294    4142 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bkh8t" podUID="e0d3b814-c1bc-4132-baa8-50d76bf12ab6"
	Nov 01 08:51:09 functional-700813 kubelet[4142]: E1101 08:51:09.279381    4142 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wp8q9" podUID="05c34bd6-9fb7-4c65-80b4-a895f26d58d6"
	Nov 01 08:51:16 functional-700813 kubelet[4142]: E1101 08:51:16.281177    4142 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Nov 01 08:51:16 functional-700813 kubelet[4142]: E1101 08:51:16.281219    4142 kuberuntime_image.go:43] "Failed to pull image" err="short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Nov 01 08:51:16 functional-700813 kubelet[4142]: E1101 08:51:16.281287    4142 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-bkh8t_default(e0d3b814-c1bc-4132-baa8-50d76bf12ab6): ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" logger="UnhandledError"
	Nov 01 08:51:16 functional-700813 kubelet[4142]: E1101 08:51:16.281313    4142 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bkh8t" podUID="e0d3b814-c1bc-4132-baa8-50d76bf12ab6"
	Nov 01 08:51:20 functional-700813 kubelet[4142]: E1101 08:51:20.279161    4142 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wp8q9" podUID="05c34bd6-9fb7-4c65-80b4-a895f26d58d6"
	Nov 01 08:51:27 functional-700813 kubelet[4142]: E1101 08:51:27.278684    4142 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bkh8t" podUID="e0d3b814-c1bc-4132-baa8-50d76bf12ab6"
	Nov 01 08:51:32 functional-700813 kubelet[4142]: E1101 08:51:32.279613    4142 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wp8q9" podUID="05c34bd6-9fb7-4c65-80b4-a895f26d58d6"
	
	
	==> kubernetes-dashboard [811568a70c364ee3d90b29608b100daa29c0e5fc4a1a53e73d6c8cc9fc36255f] <==
	2025/11/01 08:41:08 Starting overwatch
	2025/11/01 08:41:08 Using namespace: kubernetes-dashboard
	2025/11/01 08:41:08 Using in-cluster config to connect to apiserver
	2025/11/01 08:41:08 Using secret token for csrf signing
	2025/11/01 08:41:08 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 08:41:08 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 08:41:08 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 08:41:08 Generating JWE encryption key
	2025/11/01 08:41:08 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 08:41:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 08:41:09 Initializing JWE encryption key from synchronized object
	2025/11/01 08:41:09 Creating in-cluster Sidecar client
	2025/11/01 08:41:09 Serving insecurely on HTTP port: 9090
	2025/11/01 08:41:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 08:41:39 Successful request to sidecar
	
	
	==> storage-provisioner [0593f9a73d48a2d215421727a222aeba407848faddde9032e26a1478b6465cf2] <==
	I1101 08:39:49.541330       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 08:39:49.556803       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 08:39:49.556866       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 08:39:49.560570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:39:53.015562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [1a460e8626e92c5c698d11027caa4fe03f060ce9acedda4cc4a2ec98e77cc125] <==
	W1101 08:51:16.131373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:51:18.134934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:51:18.139655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:51:20.142243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:51:20.146873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:51:22.150011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:51:22.154284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:51:24.157862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:51:24.162171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:51:26.165592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:51:26.169737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:51:28.172350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:51:28.176591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:51:30.180214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:51:30.187287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:51:32.189937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:51:32.193921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:51:34.197243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:51:34.201285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:51:36.204294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:51:36.208851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:51:38.211712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:51:38.216076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:51:40.219303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:51:40.224047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-700813 -n functional-700813
helpers_test.go:269: (dbg) Run:  kubectl --context functional-700813 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-bkh8t hello-node-connect-7d85dfc575-wp8q9
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-700813 describe pod busybox-mount hello-node-75c85bcc94-bkh8t hello-node-connect-7d85dfc575-wp8q9
helpers_test.go:290: (dbg) kubectl --context functional-700813 describe pod busybox-mount hello-node-75c85bcc94-bkh8t hello-node-connect-7d85dfc575-wp8q9:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-700813/192.168.49.2
	Start Time:       Sat, 01 Nov 2025 08:40:43 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  cri-o://5fd70fd4e66d0cd759eecb1dd504c2a07ba3a3a6f716a26dfc6bed803d8b09ee
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 01 Nov 2025 08:40:46 +0000
	      Finished:     Sat, 01 Nov 2025 08:40:46 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gp8lf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-gp8lf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-700813
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.016s (2.016s including waiting). Image size: 3774172 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-bkh8t
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-700813/192.168.49.2
	Start Time:       Sat, 01 Nov 2025 08:40:35 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8b42p (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8b42p:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  11m                 default-scheduler  Successfully assigned default/hello-node-75c85bcc94-bkh8t to functional-700813
	  Normal   Pulling    8m9s (x5 over 11m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     8m9s (x5 over 11m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     8m9s (x5 over 11m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    52s (x44 over 11m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     52s (x44 over 11m)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-wp8q9
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-700813/192.168.49.2
	Start Time:       Sat, 01 Nov 2025 08:41:38 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xn9qg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xn9qg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-wp8q9 to functional-700813
	  Normal   Pulling    7m6s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m6s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m6s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    4m56s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m56s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.48s)
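Note on the failure mode: the kubelet events above show every pull of "kicbase/echo-server" rejected with "short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list", i.e. CRI-O refuses to resolve the unqualified short name. A minimal sketch of the two usual workarounds, assuming the standard containers-registries.conf location inside the node; the registry choice, tag, and file path below are assumptions for illustration, not values taken from this run:

	# use a fully-qualified reference so no short-name resolution is needed
	kubectl --context functional-700813 set image deployment/hello-node-connect echo-server=docker.io/kicbase/echo-server:latest
	# or, inside the node, relax short-name handling in /etc/containers/registries.conf:
	#   short-name-mode = "permissive"
	#   unqualified-search-registries = ["docker.io"]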

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-700813 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-700813 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-bkh8t" [e0d3b814-c1bc-4132-baa8-50d76bf12ab6] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-700813 -n functional-700813
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-01 08:50:35.609145482 +0000 UTC m=+1272.598405395
functional_test.go:1460: (dbg) Run:  kubectl --context functional-700813 describe po hello-node-75c85bcc94-bkh8t -n default
functional_test.go:1460: (dbg) kubectl --context functional-700813 describe po hello-node-75c85bcc94-bkh8t -n default:
Name:             hello-node-75c85bcc94-bkh8t
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-700813/192.168.49.2
Start Time:       Sat, 01 Nov 2025 08:40:35 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8b42p (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-8b42p:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-bkh8t to functional-700813
Normal   Pulling    7m3s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m3s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m3s (x5 over 10m)      kubelet            Error: ErrImagePull
Normal   BackOff    4m46s (x22 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m46s (x22 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-700813 logs hello-node-75c85bcc94-bkh8t -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-700813 logs hello-node-75c85bcc94-bkh8t -n default: exit status 1 (101.206995ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-bkh8t" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-700813 logs hello-node-75c85bcc94-bkh8t -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.78s)
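The deployment in this test is created with the bare short name ("--image kicbase/echo-server"), which is exactly what trips the enforcing short-name check recorded in the pod events. A minimal sketch of the same step with a fully-qualified reference; the registry and tag are assumptions for illustration, not the test's actual arguments:

	kubectl --context functional-700813 create deployment hello-node --image=docker.io/kicbase/echo-server:latest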

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 image load --daemon kicbase/echo-server:functional-700813 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-700813" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.91s)
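With the crio runtime, images brought in via "image load --daemon" are listed under a fully-qualified name, so the assertion can be cross-checked by hand. A small sketch reusing the command the test already runs; the grep pattern and the expected names are illustrative assumptions:

	out/minikube-linux-arm64 -p functional-700813 image ls | grep echo-server
	# a successful load would be expected to show docker.io/kicbase/echo-server:functional-700813 (or a localhost/ variant)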

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 image load --daemon kicbase/echo-server:functional-700813 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-700813" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-700813
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 image load --daemon kicbase/echo-server:functional-700813 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-700813" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 image save kicbase/echo-server:functional-700813 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1101 08:40:41.408362 2339430 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:40:41.409820 2339430 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:40:41.409842 2339430 out.go:374] Setting ErrFile to fd 2...
	I1101 08:40:41.409856 2339430 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:40:41.410214 2339430 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 08:40:41.410984 2339430 config.go:182] Loaded profile config "functional-700813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:40:41.411112 2339430 config.go:182] Loaded profile config "functional-700813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:40:41.411619 2339430 cli_runner.go:164] Run: docker container inspect functional-700813 --format={{.State.Status}}
	I1101 08:40:41.431517 2339430 ssh_runner.go:195] Run: systemctl --version
	I1101 08:40:41.431575 2339430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-700813
	I1101 08:40:41.452298 2339430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36065 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/functional-700813/id_rsa Username:docker}
	I1101 08:40:41.562216 2339430 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1101 08:40:41.562289 2339430 cache_images.go:255] Failed to load cached images for "functional-700813": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1101 08:40:41.562314 2339430 cache_images.go:267] failed pushing to: functional-700813

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)
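This one is a cascade: the stderr shows the load aborting because /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar does not exist, which is the tar the earlier ImageSaveToFile test failed to produce. A minimal sketch of the intended save/load round trip with an explicit existence check in between; the paths reuse the ones from the log, while the check itself is an illustrative addition:

	out/minikube-linux-arm64 -p functional-700813 image save kicbase/echo-server:functional-700813 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	test -f /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar   # fails in this run: the tar was never written
	out/minikube-linux-arm64 -p functional-700813 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar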

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-700813
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 image save --daemon kicbase/echo-server:functional-700813 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-700813
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-700813: exit status 1 (19.154949ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-700813

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-700813

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.35s)
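The inspect comes back empty, consistent with "image save --daemon" having nothing to export after the preceding failures left no echo-server image inside the crio runtime (the docker rmi at the start of this test removed the host copy). A quick check covering both names the image could reappear under; purely illustrative:

	docker images | grep echo-server
	docker image inspect kicbase/echo-server:functional-700813 localhost/kicbase/echo-server:functional-700813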

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-700813 service --namespace=default --https --url hello-node: exit status 115 (388.598397ms)

-- stdout --
	https://192.168.49.2:31986
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-700813 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

TestFunctional/parallel/ServiceCmd/Format (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-700813 service hello-node --url --format={{.IP}}: exit status 115 (394.109306ms)

-- stdout --
	192.168.49.2
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-700813 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.39s)

TestFunctional/parallel/ServiceCmd/URL (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-700813 service hello-node --url: exit status 115 (384.231554ms)

-- stdout --
	http://192.168.49.2:31986
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-700813 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31986
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.38s)
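Note: all three ServiceCmd failures report SVC_UNREACHABLE because the hello-node deployment had no running pod when the URL was requested. A hedged sketch of waiting for a ready endpoint before asking minikube for the URL (profile and service names come from the log; the kubectl context name and the timeout are assumptions):

// wait_for_endpoints.go: illustrative sketch only, not the test's own logic.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	profile := "functional-700813"
	service := "hello-node"

	deadline := time.Now().Add(2 * time.Minute) // arbitrary example timeout
	for time.Now().Before(deadline) {
		// A non-empty address list means the service has at least one ready pod.
		out, err := exec.Command("kubectl", "--context", profile,
			"get", "endpoints", service,
			"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
		if err == nil && strings.TrimSpace(string(out)) != "" {
			url, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
				"service", service, "--url").Output()
			if err != nil {
				log.Fatalf("service --url still failing: %v", err)
			}
			fmt.Printf("service reachable at %s", url)
			return
		}
		time.Sleep(5 * time.Second)
	}
	log.Fatalf("no ready endpoint for %s/%s before the deadline", profile, service)
}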

TestJSONOutput/pause/Command (2.53s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-623514 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-623514 --output=json --user=testUser: exit status 80 (2.53071257s)

-- stdout --
	{"specversion":"1.0","id":"cd9f6b03-a4a7-4b0d-844d-483998057eb6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-623514 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"185062cb-e5da-4de3-b729-aaa6396ef767","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-01T09:04:27Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"415d6483-397c-4e21-b01d-6a20c8ec5607","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-623514 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.53s)
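Note: with --output=json, each stdout line is a CloudEvents-style JSON object, as in the dump above. A small sketch of consuming that stream and separating step events from error events (the field names mirror the log; everything else is illustrative):

// parse_json_output.go: reads the JSON lines from stdin.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // error events can be long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step: %s\n", ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}

Piping the failing command into it, e.g. `out/minikube-linux-arm64 pause -p json-output-623514 --output=json | go run parse_json_output.go`, would surface the GUEST_PAUSE error shown above.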

TestJSONOutput/unpause/Command (1.76s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-623514 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-623514 --output=json --user=testUser: exit status 80 (1.762214181s)

-- stdout --
	{"specversion":"1.0","id":"aa95ffd9-701e-4487-bd80-dea2458a8dfc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-623514 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"0ae0604e-8db4-4f5b-b50c-46c479f0d491","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-01T09:04:29Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"6b77645a-8bec-4552-8d30-531d16031d1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-623514 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.76s)

TestScheduledStopUnix (37.78s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-026741 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-026741 --memory=3072 --driver=docker  --container-runtime=crio: (33.045329893s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-026741 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-026741 -n scheduled-stop-026741
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-026741 --schedule 15s
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:98: process 2444043 running but should have been killed on reschedule of stop
panic.go:636: *** TestScheduledStopUnix FAILED at 2025-11-01 09:19:34.831100966 +0000 UTC m=+3011.820360937
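Note: the assertion at scheduled_stop_test.go:98 boils down to a process-liveness check on the previously scheduled stop daemon, which should have been killed when the stop was rescheduled. An illustrative approximation of that check (not the helper's actual code) using signal 0, with the pid from the failure used purely as an example:

// pid_alive.go: sketch of a Unix process-liveness probe.
package main

import (
	"fmt"
	"syscall"
)

func alive(pid int) bool {
	// Signal 0 performs error checking only; an ESRCH error means the process
	// is gone. (EPERM would also indicate a live process owned by another user.)
	return syscall.Kill(pid, syscall.Signal(0)) == nil
}

func main() {
	pid := 2444043 // pid taken from the failure above, purely as an example
	if alive(pid) {
		fmt.Printf("pid %d is still running; the reschedule did not kill it\n", pid)
	} else {
		fmt.Printf("pid %d is gone, as expected after rescheduling\n", pid)
	}
}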
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestScheduledStopUnix]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect scheduled-stop-026741
helpers_test.go:243: (dbg) docker inspect scheduled-stop-026741:

-- stdout --
	[
	    {
	        "Id": "f7d5621fac49b361e6a55283a349cc116f9e5c5289300aa9851c0fbc98b75e72",
	        "Created": "2025-11-01T09:19:06.63745057Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2442245,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:19:06.698304201Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/f7d5621fac49b361e6a55283a349cc116f9e5c5289300aa9851c0fbc98b75e72/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f7d5621fac49b361e6a55283a349cc116f9e5c5289300aa9851c0fbc98b75e72/hostname",
	        "HostsPath": "/var/lib/docker/containers/f7d5621fac49b361e6a55283a349cc116f9e5c5289300aa9851c0fbc98b75e72/hosts",
	        "LogPath": "/var/lib/docker/containers/f7d5621fac49b361e6a55283a349cc116f9e5c5289300aa9851c0fbc98b75e72/f7d5621fac49b361e6a55283a349cc116f9e5c5289300aa9851c0fbc98b75e72-json.log",
	        "Name": "/scheduled-stop-026741",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "scheduled-stop-026741:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "scheduled-stop-026741",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f7d5621fac49b361e6a55283a349cc116f9e5c5289300aa9851c0fbc98b75e72",
	                "LowerDir": "/var/lib/docker/overlay2/95c49e3227842a700535d5af3326521c69650eaeb91627f0712f04e738c23ec2-init/diff:/var/lib/docker/overlay2/e248e2c4c8c52e2b41c7098e27a1e6d3433c7b0d01c47093073da500268c4b77/diff",
	                "MergedDir": "/var/lib/docker/overlay2/95c49e3227842a700535d5af3326521c69650eaeb91627f0712f04e738c23ec2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/95c49e3227842a700535d5af3326521c69650eaeb91627f0712f04e738c23ec2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/95c49e3227842a700535d5af3326521c69650eaeb91627f0712f04e738c23ec2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "scheduled-stop-026741",
	                "Source": "/var/lib/docker/volumes/scheduled-stop-026741/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "scheduled-stop-026741",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "scheduled-stop-026741",
	                "name.minikube.sigs.k8s.io": "scheduled-stop-026741",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "959fef011949ac6192d5e352280838dd492eba447bccd2eff40466c2129edc54",
	            "SandboxKey": "/var/run/docker/netns/959fef011949",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36250"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36251"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36254"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36252"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36253"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "scheduled-stop-026741": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:7c:a2:c5:16:82",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1ddbd991ab57eaec3a55eb68512f7a96cc7ecb6143d4862362c4d6ace022e0da",
	                    "EndpointID": "597d22e3c5b527da44de465f9aa213fa891fa97a999e1b247da406f56da816fb",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "scheduled-stop-026741",
	                        "f7d5621fac49"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
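Note: the NetworkSettings.Ports block in the inspect output above is what minikube's template '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' reads to find the SSH port. A small sketch of extracting the same binding from `docker inspect` JSON (the container name is an example value):

// ssh_port.go: recover the host port mapped to 22/tcp from docker inspect.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	name := "scheduled-stop-026741"
	out, err := exec.Command("docker", "inspect", name).Output()
	if err != nil {
		log.Fatalf("docker inspect %s: %v", name, err)
	}
	var containers []inspect
	if err := json.Unmarshal(out, &containers); err != nil {
		log.Fatalf("parsing inspect output: %v", err)
	}
	if len(containers) == 0 || len(containers[0].NetworkSettings.Ports["22/tcp"]) == 0 {
		log.Fatalf("no 22/tcp binding found for %s", name)
	}
	b := containers[0].NetworkSettings.Ports["22/tcp"][0]
	fmt.Printf("ssh reachable at %s:%s\n", b.HostIp, b.HostPort)
}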
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-026741 -n scheduled-stop-026741
helpers_test.go:252: <<< TestScheduledStopUnix FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestScheduledStopUnix]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p scheduled-stop-026741 logs -n 25
helpers_test.go:260: TestScheduledStopUnix logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │        PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p multinode-697878                                                                                                                                       │ multinode-697878      │ jenkins │ v1.37.0 │ 01 Nov 25 09:13 UTC │ 01 Nov 25 09:13 UTC │
	│ start   │ -p multinode-697878 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-697878      │ jenkins │ v1.37.0 │ 01 Nov 25 09:13 UTC │ 01 Nov 25 09:14 UTC │
	│ node    │ list -p multinode-697878                                                                                                                                  │ multinode-697878      │ jenkins │ v1.37.0 │ 01 Nov 25 09:14 UTC │                     │
	│ node    │ multinode-697878 node delete m03                                                                                                                          │ multinode-697878      │ jenkins │ v1.37.0 │ 01 Nov 25 09:14 UTC │ 01 Nov 25 09:14 UTC │
	│ stop    │ multinode-697878 stop                                                                                                                                     │ multinode-697878      │ jenkins │ v1.37.0 │ 01 Nov 25 09:14 UTC │ 01 Nov 25 09:15 UTC │
	│ start   │ -p multinode-697878 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio                                                          │ multinode-697878      │ jenkins │ v1.37.0 │ 01 Nov 25 09:15 UTC │ 01 Nov 25 09:16 UTC │
	│ node    │ list -p multinode-697878                                                                                                                                  │ multinode-697878      │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │                     │
	│ start   │ -p multinode-697878-m02 --driver=docker  --container-runtime=crio                                                                                         │ multinode-697878-m02  │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │                     │
	│ start   │ -p multinode-697878-m03 --driver=docker  --container-runtime=crio                                                                                         │ multinode-697878-m03  │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ node    │ add -p multinode-697878                                                                                                                                   │ multinode-697878      │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │                     │
	│ delete  │ -p multinode-697878-m03                                                                                                                                   │ multinode-697878-m03  │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ delete  │ -p multinode-697878                                                                                                                                       │ multinode-697878      │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ start   │ -p test-preload-108541 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0 │ test-preload-108541   │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:17 UTC │
	│ image   │ test-preload-108541 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-108541   │ jenkins │ v1.37.0 │ 01 Nov 25 09:17 UTC │ 01 Nov 25 09:17 UTC │
	│ stop    │ -p test-preload-108541                                                                                                                                    │ test-preload-108541   │ jenkins │ v1.37.0 │ 01 Nov 25 09:17 UTC │ 01 Nov 25 09:18 UTC │
	│ start   │ -p test-preload-108541 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio                                         │ test-preload-108541   │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │ 01 Nov 25 09:18 UTC │
	│ image   │ test-preload-108541 image list                                                                                                                            │ test-preload-108541   │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │ 01 Nov 25 09:18 UTC │
	│ delete  │ -p test-preload-108541                                                                                                                                    │ test-preload-108541   │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │ 01 Nov 25 09:19 UTC │
	│ start   │ -p scheduled-stop-026741 --memory=3072 --driver=docker  --container-runtime=crio                                                                          │ scheduled-stop-026741 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:19 UTC │
	│ stop    │ -p scheduled-stop-026741 --schedule 5m                                                                                                                    │ scheduled-stop-026741 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │                     │
	│ stop    │ -p scheduled-stop-026741 --schedule 5m                                                                                                                    │ scheduled-stop-026741 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │                     │
	│ stop    │ -p scheduled-stop-026741 --schedule 5m                                                                                                                    │ scheduled-stop-026741 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │                     │
	│ stop    │ -p scheduled-stop-026741 --schedule 15s                                                                                                                   │ scheduled-stop-026741 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │                     │
	│ stop    │ -p scheduled-stop-026741 --schedule 15s                                                                                                                   │ scheduled-stop-026741 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │                     │
	│ stop    │ -p scheduled-stop-026741 --schedule 15s                                                                                                                   │ scheduled-stop-026741 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:19:01
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:19:01.313126 2441857 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:19:01.313267 2441857 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:19:01.313275 2441857 out.go:374] Setting ErrFile to fd 2...
	I1101 09:19:01.313278 2441857 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:19:01.313567 2441857 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 09:19:01.314128 2441857 out.go:368] Setting JSON to false
	I1101 09:19:01.315153 2441857 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":64887,"bootTime":1761923854,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 09:19:01.315225 2441857 start.go:143] virtualization:  
	I1101 09:19:01.319064 2441857 out.go:179] * [scheduled-stop-026741] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 09:19:01.323562 2441857 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:19:01.323619 2441857 notify.go:221] Checking for updates...
	I1101 09:19:01.330292 2441857 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:19:01.333404 2441857 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:19:01.336643 2441857 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2314135/.minikube
	I1101 09:19:01.339845 2441857 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 09:19:01.342965 2441857 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:19:01.346147 2441857 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:19:01.375044 2441857 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:19:01.375154 2441857 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:19:01.435092 2441857 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-01 09:19:01.426302183 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:19:01.435188 2441857 docker.go:319] overlay module found
	I1101 09:19:01.438412 2441857 out.go:179] * Using the docker driver based on user configuration
	I1101 09:19:01.441434 2441857 start.go:309] selected driver: docker
	I1101 09:19:01.441443 2441857 start.go:930] validating driver "docker" against <nil>
	I1101 09:19:01.441455 2441857 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:19:01.442159 2441857 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:19:01.495628 2441857 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-01 09:19:01.486961235 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:19:01.495777 2441857 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:19:01.496020 2441857 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 09:19:01.499011 2441857 out.go:179] * Using Docker driver with root privileges
	I1101 09:19:01.501946 2441857 cni.go:84] Creating CNI manager for ""
	I1101 09:19:01.502000 2441857 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:19:01.502027 2441857 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:19:01.502093 2441857 start.go:353] cluster config:
	{Name:scheduled-stop-026741 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-026741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:19:01.507312 2441857 out.go:179] * Starting "scheduled-stop-026741" primary control-plane node in "scheduled-stop-026741" cluster
	I1101 09:19:01.510280 2441857 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:19:01.513277 2441857 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:19:01.516424 2441857 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:19:01.516517 2441857 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:19:01.516558 2441857 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 09:19:01.516565 2441857 cache.go:59] Caching tarball of preloaded images
	I1101 09:19:01.516679 2441857 preload.go:233] Found /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 09:19:01.516688 2441857 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:19:01.517021 2441857 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/scheduled-stop-026741/config.json ...
	I1101 09:19:01.517038 2441857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/scheduled-stop-026741/config.json: {Name:mke2753baa8656d1482b63b7b4470beff965e1d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:19:01.537621 2441857 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:19:01.537634 2441857 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:19:01.537645 2441857 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:19:01.537677 2441857 start.go:360] acquireMachinesLock for scheduled-stop-026741: {Name:mk025522ae36e9c4a186ea67e85b46ebdaa0e161 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:19:01.537786 2441857 start.go:364] duration metric: took 94.898µs to acquireMachinesLock for "scheduled-stop-026741"
	I1101 09:19:01.537812 2441857 start.go:93] Provisioning new machine with config: &{Name:scheduled-stop-026741 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-026741 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:19:01.537878 2441857 start.go:125] createHost starting for "" (driver="docker")
	I1101 09:19:01.541331 2441857 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 09:19:01.541572 2441857 start.go:159] libmachine.API.Create for "scheduled-stop-026741" (driver="docker")
	I1101 09:19:01.541594 2441857 client.go:173] LocalClient.Create starting
	I1101 09:19:01.541653 2441857 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem
	I1101 09:19:01.541688 2441857 main.go:143] libmachine: Decoding PEM data...
	I1101 09:19:01.541703 2441857 main.go:143] libmachine: Parsing certificate...
	I1101 09:19:01.541757 2441857 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem
	I1101 09:19:01.541781 2441857 main.go:143] libmachine: Decoding PEM data...
	I1101 09:19:01.541790 2441857 main.go:143] libmachine: Parsing certificate...
	I1101 09:19:01.542186 2441857 cli_runner.go:164] Run: docker network inspect scheduled-stop-026741 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 09:19:01.560410 2441857 cli_runner.go:211] docker network inspect scheduled-stop-026741 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 09:19:01.560560 2441857 network_create.go:284] running [docker network inspect scheduled-stop-026741] to gather additional debugging logs...
	I1101 09:19:01.560583 2441857 cli_runner.go:164] Run: docker network inspect scheduled-stop-026741
	W1101 09:19:01.576165 2441857 cli_runner.go:211] docker network inspect scheduled-stop-026741 returned with exit code 1
	I1101 09:19:01.576185 2441857 network_create.go:287] error running [docker network inspect scheduled-stop-026741]: docker network inspect scheduled-stop-026741: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network scheduled-stop-026741 not found
	I1101 09:19:01.576196 2441857 network_create.go:289] output of [docker network inspect scheduled-stop-026741]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network scheduled-stop-026741 not found
	
	** /stderr **
	I1101 09:19:01.576303 2441857 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:19:01.592242 2441857 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2d14cb2bf967 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:44:96:dd:d5:f7} reservation:<nil>}
	I1101 09:19:01.592576 2441857 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5e2113ca68f6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fa:43:2d:73:9d:6f} reservation:<nil>}
	I1101 09:19:01.592868 2441857 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-06825307e87a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:46:bb:6a:93:8e:bc} reservation:<nil>}
	I1101 09:19:01.593219 2441857 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001857e80}
	I1101 09:19:01.593234 2441857 network_create.go:124] attempt to create docker network scheduled-stop-026741 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1101 09:19:01.593296 2441857 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=scheduled-stop-026741 scheduled-stop-026741
	I1101 09:19:01.658971 2441857 network_create.go:108] docker network scheduled-stop-026741 192.168.76.0/24 created
	I1101 09:19:01.659010 2441857 kic.go:121] calculated static IP "192.168.76.2" for the "scheduled-stop-026741" container
	I1101 09:19:01.659087 2441857 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 09:19:01.673321 2441857 cli_runner.go:164] Run: docker volume create scheduled-stop-026741 --label name.minikube.sigs.k8s.io=scheduled-stop-026741 --label created_by.minikube.sigs.k8s.io=true
	I1101 09:19:01.690983 2441857 oci.go:103] Successfully created a docker volume scheduled-stop-026741
	I1101 09:19:01.691078 2441857 cli_runner.go:164] Run: docker run --rm --name scheduled-stop-026741-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-026741 --entrypoint /usr/bin/test -v scheduled-stop-026741:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 09:19:02.205500 2441857 oci.go:107] Successfully prepared a docker volume scheduled-stop-026741
	I1101 09:19:02.205546 2441857 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:19:02.205596 2441857 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 09:19:02.205700 2441857 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-026741:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 09:19:06.568544 2441857 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-026741:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.362803137s)
	I1101 09:19:06.568565 2441857 kic.go:203] duration metric: took 4.36297398s to extract preloaded images to volume ...
	W1101 09:19:06.568697 2441857 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 09:19:06.568792 2441857 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 09:19:06.623826 2441857 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname scheduled-stop-026741 --name scheduled-stop-026741 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-026741 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=scheduled-stop-026741 --network scheduled-stop-026741 --ip 192.168.76.2 --volume scheduled-stop-026741:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 09:19:06.900890 2441857 cli_runner.go:164] Run: docker container inspect scheduled-stop-026741 --format={{.State.Running}}
	I1101 09:19:06.923364 2441857 cli_runner.go:164] Run: docker container inspect scheduled-stop-026741 --format={{.State.Status}}
	I1101 09:19:06.946567 2441857 cli_runner.go:164] Run: docker exec scheduled-stop-026741 stat /var/lib/dpkg/alternatives/iptables
	I1101 09:19:07.003353 2441857 oci.go:144] the created container "scheduled-stop-026741" has a running status.
	I1101 09:19:07.003398 2441857 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/scheduled-stop-026741/id_rsa...
	I1101 09:19:07.292500 2441857 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/scheduled-stop-026741/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 09:19:07.311677 2441857 cli_runner.go:164] Run: docker container inspect scheduled-stop-026741 --format={{.State.Status}}
	I1101 09:19:07.332077 2441857 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 09:19:07.332088 2441857 kic_runner.go:114] Args: [docker exec --privileged scheduled-stop-026741 chown docker:docker /home/docker/.ssh/authorized_keys]
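A minimal sketch of the key-injection step above, assuming (as in this run) that the kicbase image already ships /home/docker/.ssh; the ./id_rsa path is a stand-in for the profile's machines directory:

    NODE=scheduled-stop-026741
    ssh-keygen -t rsa -N '' -f ./id_rsa                    # key pair stays on the host
    docker cp ./id_rsa.pub "$NODE":/home/docker/.ssh/authorized_keys
    # docker cp writes as root, so ownership has to be fixed afterwards
    docker exec --privileged "$NODE" chown docker:docker /home/docker/.ssh/authorized_keys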
	I1101 09:19:07.416767 2441857 cli_runner.go:164] Run: docker container inspect scheduled-stop-026741 --format={{.State.Status}}
	I1101 09:19:07.441361 2441857 machine.go:94] provisionDockerMachine start ...
	I1101 09:19:07.441436 2441857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-026741
	I1101 09:19:07.461380 2441857 main.go:143] libmachine: Using SSH client type: native
	I1101 09:19:07.461694 2441857 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36250 <nil> <nil>}
	I1101 09:19:07.461700 2441857 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:19:07.464096 2441857 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46186->127.0.0.1:36250: read: connection reset by peer
	I1101 09:19:10.615287 2441857 main.go:143] libmachine: SSH cmd err, output: <nil>: scheduled-stop-026741
	
	I1101 09:19:10.615300 2441857 ubuntu.go:182] provisioning hostname "scheduled-stop-026741"
	I1101 09:19:10.615358 2441857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-026741
	I1101 09:19:10.632271 2441857 main.go:143] libmachine: Using SSH client type: native
	I1101 09:19:10.632564 2441857 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36250 <nil> <nil>}
	I1101 09:19:10.632585 2441857 main.go:143] libmachine: About to run SSH command:
	sudo hostname scheduled-stop-026741 && echo "scheduled-stop-026741" | sudo tee /etc/hostname
	I1101 09:19:10.788509 2441857 main.go:143] libmachine: SSH cmd err, output: <nil>: scheduled-stop-026741
	
	I1101 09:19:10.788580 2441857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-026741
	I1101 09:19:10.804870 2441857 main.go:143] libmachine: Using SSH client type: native
	I1101 09:19:10.805172 2441857 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36250 <nil> <nil>}
	I1101 09:19:10.805187 2441857 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sscheduled-stop-026741' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 scheduled-stop-026741/g' /etc/hosts;
				else 
					echo '127.0.1.1 scheduled-stop-026741' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:19:10.951885 2441857 main.go:143] libmachine: SSH cmd err, output: <nil>: 
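The inspect template used above is how every later SSH call resolves the host port that Docker mapped to the container's port 22. A sketch of the same lookup from a shell (the key path is a placeholder for the profile's id_rsa):

    NODE=scheduled-stop-026741
    PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' "$NODE")
    ssh -o StrictHostKeyChecking=no -i ./id_rsa -p "$PORT" docker@127.0.0.1 hostname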
	I1101 09:19:10.951901 2441857 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-2314135/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-2314135/.minikube}
	I1101 09:19:10.951918 2441857 ubuntu.go:190] setting up certificates
	I1101 09:19:10.951926 2441857 provision.go:84] configureAuth start
	I1101 09:19:10.951981 2441857 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-026741
	I1101 09:19:10.967441 2441857 provision.go:143] copyHostCerts
	I1101 09:19:10.967495 2441857 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem, removing ...
	I1101 09:19:10.967503 2441857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem
	I1101 09:19:10.967577 2441857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem (1082 bytes)
	I1101 09:19:10.967662 2441857 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem, removing ...
	I1101 09:19:10.967666 2441857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem
	I1101 09:19:10.967696 2441857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem (1123 bytes)
	I1101 09:19:10.967745 2441857 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem, removing ...
	I1101 09:19:10.967749 2441857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem
	I1101 09:19:10.967770 2441857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem (1675 bytes)
	I1101 09:19:10.967812 2441857 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem org=jenkins.scheduled-stop-026741 san=[127.0.0.1 192.168.76.2 localhost minikube scheduled-stop-026741]
	I1101 09:19:11.097279 2441857 provision.go:177] copyRemoteCerts
	I1101 09:19:11.097336 2441857 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:19:11.097374 2441857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-026741
	I1101 09:19:11.114833 2441857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36250 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/scheduled-stop-026741/id_rsa Username:docker}
	I1101 09:19:11.219222 2441857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:19:11.235412 2441857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1101 09:19:11.252040 2441857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:19:11.268534 2441857 provision.go:87] duration metric: took 316.585881ms to configureAuth
	I1101 09:19:11.268551 2441857 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:19:11.268734 2441857 config.go:182] Loaded profile config "scheduled-stop-026741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:19:11.268838 2441857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-026741
	I1101 09:19:11.285470 2441857 main.go:143] libmachine: Using SSH client type: native
	I1101 09:19:11.285769 2441857 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36250 <nil> <nil>}
	I1101 09:19:11.285781 2441857 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:19:11.537638 2441857 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:19:11.537649 2441857 machine.go:97] duration metric: took 4.096277265s to provisionDockerMachine
	I1101 09:19:11.537658 2441857 client.go:176] duration metric: took 9.99605908s to LocalClient.Create
	I1101 09:19:11.537685 2441857 start.go:167] duration metric: took 9.996109786s to libmachine.API.Create "scheduled-stop-026741"
	I1101 09:19:11.537693 2441857 start.go:293] postStartSetup for "scheduled-stop-026741" (driver="docker")
	I1101 09:19:11.537701 2441857 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:19:11.537762 2441857 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:19:11.537804 2441857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-026741
	I1101 09:19:11.554602 2441857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36250 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/scheduled-stop-026741/id_rsa Username:docker}
	I1101 09:19:11.660089 2441857 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:19:11.663486 2441857 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:19:11.663506 2441857 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:19:11.663515 2441857 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/addons for local assets ...
	I1101 09:19:11.663567 2441857 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/files for local assets ...
	I1101 09:19:11.663645 2441857 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem -> 23159822.pem in /etc/ssl/certs
	I1101 09:19:11.663750 2441857 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:19:11.671314 2441857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem --> /etc/ssl/certs/23159822.pem (1708 bytes)
	I1101 09:19:11.689107 2441857 start.go:296] duration metric: took 151.400577ms for postStartSetup
	I1101 09:19:11.689467 2441857 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-026741
	I1101 09:19:11.705890 2441857 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/scheduled-stop-026741/config.json ...
	I1101 09:19:11.706151 2441857 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:19:11.706196 2441857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-026741
	I1101 09:19:11.722133 2441857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36250 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/scheduled-stop-026741/id_rsa Username:docker}
	I1101 09:19:11.824726 2441857 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:19:11.829137 2441857 start.go:128] duration metric: took 10.291246505s to createHost
	I1101 09:19:11.829151 2441857 start.go:83] releasing machines lock for "scheduled-stop-026741", held for 10.291359241s
	I1101 09:19:11.829224 2441857 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-026741
	I1101 09:19:11.845946 2441857 ssh_runner.go:195] Run: cat /version.json
	I1101 09:19:11.845989 2441857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-026741
	I1101 09:19:11.846236 2441857 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:19:11.846288 2441857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-026741
	I1101 09:19:11.868068 2441857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36250 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/scheduled-stop-026741/id_rsa Username:docker}
	I1101 09:19:11.868651 2441857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36250 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/scheduled-stop-026741/id_rsa Username:docker}
	I1101 09:19:12.059121 2441857 ssh_runner.go:195] Run: systemctl --version
	I1101 09:19:12.065578 2441857 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:19:12.104703 2441857 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:19:12.108794 2441857 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:19:12.108852 2441857 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:19:12.137053 2441857 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 09:19:12.137065 2441857 start.go:496] detecting cgroup driver to use...
	I1101 09:19:12.137096 2441857 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 09:19:12.137165 2441857 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:19:12.153873 2441857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:19:12.166628 2441857 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:19:12.166682 2441857 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:19:12.184835 2441857 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:19:12.204488 2441857 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:19:12.311405 2441857 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:19:12.432367 2441857 docker.go:234] disabling docker service ...
	I1101 09:19:12.432436 2441857 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:19:12.453864 2441857 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:19:12.466897 2441857 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:19:12.582833 2441857 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:19:12.708297 2441857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:19:12.721312 2441857 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:19:12.735491 2441857 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:19:12.735558 2441857 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:19:12.744556 2441857 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:19:12.744614 2441857 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:19:12.753910 2441857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:19:12.762318 2441857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:19:12.770965 2441857 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:19:12.778823 2441857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:19:12.786879 2441857 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:19:12.799794 2441857 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:19:12.808130 2441857 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:19:12.815417 2441857 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:19:12.822922 2441857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:19:12.937741 2441857 ssh_runner.go:195] Run: sudo systemctl restart crio
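The sed edits above all converge on a single drop-in file; a sketch of roughly what /etc/crio/crio.conf.d/02-crio.conf ends up expressing, assuming the stock section layout of the kicbase image (other pre-existing keys in the real file are omitted):

    sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<'EOF'
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart crio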
	I1101 09:19:13.064288 2441857 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:19:13.064351 2441857 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:19:13.068004 2441857 start.go:564] Will wait 60s for crictl version
	I1101 09:19:13.068051 2441857 ssh_runner.go:195] Run: which crictl
	I1101 09:19:13.071598 2441857 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:19:13.096140 2441857 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:19:13.096233 2441857 ssh_runner.go:195] Run: crio --version
	I1101 09:19:13.123555 2441857 ssh_runner.go:195] Run: crio --version
	I1101 09:19:13.162950 2441857 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:19:13.165771 2441857 cli_runner.go:164] Run: docker network inspect scheduled-stop-026741 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:19:13.181307 2441857 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 09:19:13.185114 2441857 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:19:13.194307 2441857 kubeadm.go:884] updating cluster {Name:scheduled-stop-026741 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-026741 Namespace:default APIServerHAVIP: APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:19:13.194412 2441857 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:19:13.194463 2441857 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:19:13.228366 2441857 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:19:13.228377 2441857 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:19:13.228437 2441857 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:19:13.252604 2441857 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:19:13.252616 2441857 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:19:13.252622 2441857 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 09:19:13.252709 2441857 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=scheduled-stop-026741 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-026741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:19:13.252813 2441857 ssh_runner.go:195] Run: crio config
	I1101 09:19:13.305923 2441857 cni.go:84] Creating CNI manager for ""
	I1101 09:19:13.305933 2441857 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:19:13.305946 2441857 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:19:13.305969 2441857 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:scheduled-stop-026741 NodeName:scheduled-stop-026741 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:19:13.306098 2441857 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "scheduled-stop-026741"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:19:13.306162 2441857 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:19:13.313950 2441857 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:19:13.314008 2441857 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:19:13.321515 2441857 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I1101 09:19:13.334520 2441857 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:19:13.347017 2441857 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
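Once the rendered config has landed on the node it can be sanity-checked before init; a sketch assuming kubeadm 1.26 or newer (which ships `kubeadm config validate`) and the paths used above:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    # or walk the full phase ordering without changing the node:
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run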
	I1101 09:19:13.359473 2441857 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:19:13.363120 2441857 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:19:13.372566 2441857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:19:13.496814 2441857 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:19:13.512204 2441857 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/scheduled-stop-026741 for IP: 192.168.76.2
	I1101 09:19:13.512224 2441857 certs.go:195] generating shared ca certs ...
	I1101 09:19:13.512239 2441857 certs.go:227] acquiring lock for ca certs: {Name:mk24842b93d4e231663829c7c8677798ff77a3a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:19:13.512389 2441857 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key
	I1101 09:19:13.512433 2441857 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key
	I1101 09:19:13.512439 2441857 certs.go:257] generating profile certs ...
	I1101 09:19:13.512544 2441857 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/scheduled-stop-026741/client.key
	I1101 09:19:13.512559 2441857 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/scheduled-stop-026741/client.crt with IP's: []
	I1101 09:19:13.761401 2441857 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/scheduled-stop-026741/client.crt ...
	I1101 09:19:13.761417 2441857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/scheduled-stop-026741/client.crt: {Name:mkfac269f6f85dc8a12f3a1f563873d379dc84e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:19:13.761622 2441857 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/scheduled-stop-026741/client.key ...
	I1101 09:19:13.761639 2441857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/scheduled-stop-026741/client.key: {Name:mk503a72710c1a08e3b489ca60a9db401fa0813e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:19:13.761735 2441857 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/scheduled-stop-026741/apiserver.key.d5bca0a7
	I1101 09:19:13.761747 2441857 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/scheduled-stop-026741/apiserver.crt.d5bca0a7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1101 09:19:14.487260 2441857 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/scheduled-stop-026741/apiserver.crt.d5bca0a7 ...
	I1101 09:19:14.487277 2441857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/scheduled-stop-026741/apiserver.crt.d5bca0a7: {Name:mk961b34da163eea46b92ccfd2154a07d9579e6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:19:14.487483 2441857 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/scheduled-stop-026741/apiserver.key.d5bca0a7 ...
	I1101 09:19:14.487492 2441857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/scheduled-stop-026741/apiserver.key.d5bca0a7: {Name:mkd9d56649ffc5fe5e54c619f742a3684ec70b6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:19:14.487574 2441857 certs.go:382] copying /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/scheduled-stop-026741/apiserver.crt.d5bca0a7 -> /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/scheduled-stop-026741/apiserver.crt
	I1101 09:19:14.487646 2441857 certs.go:386] copying /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/scheduled-stop-026741/apiserver.key.d5bca0a7 -> /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/scheduled-stop-026741/apiserver.key
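The apiserver certificate generated above is signed for the service IP, loopback, and the node IP. A minimal openssl sketch of issuing a cert with the same SAN set from a CA key pair; the file names, subject, and validity period here are illustrative, not the values minikube uses:

    openssl req -new -newkey rsa:2048 -nodes -keyout apiserver.key \
      -subj "/CN=minikube" -out apiserver.csr
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 \
      -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.76.2') \
      -out apiserver.crt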
	I1101 09:19:14.487698 2441857 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/scheduled-stop-026741/proxy-client.key
	I1101 09:19:14.487711 2441857 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/scheduled-stop-026741/proxy-client.crt with IP's: []
	I1101 09:19:14.741141 2441857 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/scheduled-stop-026741/proxy-client.crt ...
	I1101 09:19:14.741156 2441857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/scheduled-stop-026741/proxy-client.crt: {Name:mkc4a2877aaaac1bc445979f204146e2f2bd4a65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:19:14.741366 2441857 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/scheduled-stop-026741/proxy-client.key ...
	I1101 09:19:14.741373 2441857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/scheduled-stop-026741/proxy-client.key: {Name:mk58f4f9c5f545955252e6560516f12ad2d7058e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:19:14.741572 2441857 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem (1338 bytes)
	W1101 09:19:14.741605 2441857 certs.go:480] ignoring /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982_empty.pem, impossibly tiny 0 bytes
	I1101 09:19:14.741612 2441857 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 09:19:14.741634 2441857 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:19:14.741654 2441857 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:19:14.741675 2441857 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem (1675 bytes)
	I1101 09:19:14.741726 2441857 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem (1708 bytes)
	I1101 09:19:14.742264 2441857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:19:14.760592 2441857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 09:19:14.777768 2441857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:19:14.795104 2441857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:19:14.811920 2441857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/scheduled-stop-026741/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 09:19:14.829190 2441857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/scheduled-stop-026741/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 09:19:14.846054 2441857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/scheduled-stop-026741/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:19:14.862717 2441857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/scheduled-stop-026741/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:19:14.879566 2441857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem --> /usr/share/ca-certificates/2315982.pem (1338 bytes)
	I1101 09:19:14.897263 2441857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem --> /usr/share/ca-certificates/23159822.pem (1708 bytes)
	I1101 09:19:14.913854 2441857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:19:14.931219 2441857 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:19:14.943961 2441857 ssh_runner.go:195] Run: openssl version
	I1101 09:19:14.949998 2441857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2315982.pem && ln -fs /usr/share/ca-certificates/2315982.pem /etc/ssl/certs/2315982.pem"
	I1101 09:19:14.957944 2441857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2315982.pem
	I1101 09:19:14.961865 2441857 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 08:36 /usr/share/ca-certificates/2315982.pem
	I1101 09:19:14.961919 2441857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2315982.pem
	I1101 09:19:15.013858 2441857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2315982.pem /etc/ssl/certs/51391683.0"
	I1101 09:19:15.025986 2441857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23159822.pem && ln -fs /usr/share/ca-certificates/23159822.pem /etc/ssl/certs/23159822.pem"
	I1101 09:19:15.036281 2441857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23159822.pem
	I1101 09:19:15.040931 2441857 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 08:36 /usr/share/ca-certificates/23159822.pem
	I1101 09:19:15.040987 2441857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23159822.pem
	I1101 09:19:15.088400 2441857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23159822.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:19:15.096927 2441857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:19:15.105359 2441857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:19:15.109320 2441857 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:19:15.109375 2441857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:19:15.150812 2441857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
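The b5213941.0 link name above is simply the OpenSSL subject hash of the minikube CA, so the same symlink can be derived by hand:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"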
	I1101 09:19:15.159127 2441857 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:19:15.162893 2441857 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 09:19:15.162947 2441857 kubeadm.go:401] StartCluster: {Name:scheduled-stop-026741 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-026741 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:19:15.163013 2441857 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:19:15.163077 2441857 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:19:15.194559 2441857 cri.go:89] found id: ""
	I1101 09:19:15.194620 2441857 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:19:15.202532 2441857 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:19:15.210408 2441857 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 09:19:15.210460 2441857 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:19:15.218137 2441857 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 09:19:15.218146 2441857 kubeadm.go:158] found existing configuration files:
	
	I1101 09:19:15.218200 2441857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 09:19:15.225534 2441857 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 09:19:15.225598 2441857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 09:19:15.232619 2441857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 09:19:15.239879 2441857 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 09:19:15.239934 2441857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:19:15.246969 2441857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 09:19:15.254526 2441857 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 09:19:15.254577 2441857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:19:15.261655 2441857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 09:19:15.269794 2441857 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 09:19:15.269854 2441857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 09:19:15.277996 2441857 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 09:19:15.344054 2441857 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 09:19:15.344284 2441857 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 09:19:15.412658 2441857 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 09:19:32.594046 2441857 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 09:19:32.594097 2441857 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 09:19:32.594187 2441857 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 09:19:32.594244 2441857 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 09:19:32.594279 2441857 kubeadm.go:319] OS: Linux
	I1101 09:19:32.594325 2441857 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 09:19:32.594374 2441857 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 09:19:32.594422 2441857 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 09:19:32.594472 2441857 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 09:19:32.594521 2441857 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 09:19:32.594577 2441857 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 09:19:32.594623 2441857 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 09:19:32.594672 2441857 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 09:19:32.594719 2441857 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 09:19:32.594795 2441857 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 09:19:32.594892 2441857 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 09:19:32.594985 2441857 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 09:19:32.595049 2441857 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 09:19:32.598002 2441857 out.go:252]   - Generating certificates and keys ...
	I1101 09:19:32.598096 2441857 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 09:19:32.598165 2441857 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 09:19:32.598233 2441857 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 09:19:32.598291 2441857 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 09:19:32.598352 2441857 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 09:19:32.598405 2441857 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 09:19:32.598461 2441857 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 09:19:32.598589 2441857 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost scheduled-stop-026741] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 09:19:32.598643 2441857 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 09:19:32.598771 2441857 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost scheduled-stop-026741] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 09:19:32.598839 2441857 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 09:19:32.598903 2441857 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 09:19:32.598949 2441857 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 09:19:32.599006 2441857 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 09:19:32.599058 2441857 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 09:19:32.599115 2441857 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 09:19:32.599173 2441857 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 09:19:32.599280 2441857 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 09:19:32.599346 2441857 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 09:19:32.599453 2441857 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 09:19:32.599527 2441857 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 09:19:32.604295 2441857 out.go:252]   - Booting up control plane ...
	I1101 09:19:32.604413 2441857 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:19:32.604520 2441857 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:19:32.604589 2441857 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 09:19:32.604713 2441857 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:19:32.604818 2441857 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 09:19:32.604944 2441857 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 09:19:32.605043 2441857 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:19:32.605084 2441857 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:19:32.605218 2441857 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 09:19:32.605335 2441857 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 09:19:32.605401 2441857 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501817548s
	I1101 09:19:32.605496 2441857 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 09:19:32.605578 2441857 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1101 09:19:32.605683 2441857 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 09:19:32.605766 2441857 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 09:19:32.605843 2441857 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.572652812s
	I1101 09:19:32.605912 2441857 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.157572813s
	I1101 09:19:32.605984 2441857 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001712766s
	I1101 09:19:32.606093 2441857 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 09:19:32.606220 2441857 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 09:19:32.606288 2441857 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 09:19:32.606482 2441857 kubeadm.go:319] [mark-control-plane] Marking the node scheduled-stop-026741 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 09:19:32.606538 2441857 kubeadm.go:319] [bootstrap-token] Using token: amdovw.o4avosdzkn0xw372
	I1101 09:19:32.609664 2441857 out.go:252]   - Configuring RBAC rules ...
	I1101 09:19:32.609794 2441857 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 09:19:32.609881 2441857 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 09:19:32.610054 2441857 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 09:19:32.610198 2441857 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 09:19:32.610319 2441857 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 09:19:32.610408 2441857 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 09:19:32.610527 2441857 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 09:19:32.610571 2441857 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 09:19:32.610618 2441857 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 09:19:32.610622 2441857 kubeadm.go:319] 
	I1101 09:19:32.610699 2441857 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 09:19:32.610702 2441857 kubeadm.go:319] 
	I1101 09:19:32.610782 2441857 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 09:19:32.610785 2441857 kubeadm.go:319] 
	I1101 09:19:32.610811 2441857 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 09:19:32.610871 2441857 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 09:19:32.610923 2441857 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 09:19:32.610926 2441857 kubeadm.go:319] 
	I1101 09:19:32.610981 2441857 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 09:19:32.610985 2441857 kubeadm.go:319] 
	I1101 09:19:32.611034 2441857 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 09:19:32.611037 2441857 kubeadm.go:319] 
	I1101 09:19:32.611091 2441857 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 09:19:32.611168 2441857 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 09:19:32.611239 2441857 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 09:19:32.611242 2441857 kubeadm.go:319] 
	I1101 09:19:32.611329 2441857 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 09:19:32.611408 2441857 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 09:19:32.611411 2441857 kubeadm.go:319] 
	I1101 09:19:32.611498 2441857 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token amdovw.o4avosdzkn0xw372 \
	I1101 09:19:32.611612 2441857 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4543f3590cccb8495171c728a2631a18a238961aafa5b09f43cdaf25ae01fa5d \
	I1101 09:19:32.611632 2441857 kubeadm.go:319] 	--control-plane 
	I1101 09:19:32.611636 2441857 kubeadm.go:319] 
	I1101 09:19:32.611724 2441857 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 09:19:32.611727 2441857 kubeadm.go:319] 
	I1101 09:19:32.611812 2441857 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token amdovw.o4avosdzkn0xw372 \
	I1101 09:19:32.612011 2441857 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4543f3590cccb8495171c728a2631a18a238961aafa5b09f43cdaf25ae01fa5d 
	I1101 09:19:32.612033 2441857 cni.go:84] Creating CNI manager for ""
	I1101 09:19:32.612040 2441857 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:19:32.615038 2441857 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 09:19:32.617925 2441857 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 09:19:32.621832 2441857 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 09:19:32.621842 2441857 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 09:19:32.634761 2441857 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 09:19:32.920667 2441857 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:19:32.920797 2441857 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:32.920887 2441857 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes scheduled-stop-026741 minikube.k8s.io/updated_at=2025_11_01T09_19_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192 minikube.k8s.io/name=scheduled-stop-026741 minikube.k8s.io/primary=true
	I1101 09:19:32.938777 2441857 ops.go:34] apiserver oom_adj: -16
	I1101 09:19:33.147895 2441857 kubeadm.go:1114] duration metric: took 227.142362ms to wait for elevateKubeSystemPrivileges
	I1101 09:19:33.147942 2441857 kubeadm.go:403] duration metric: took 17.984998188s to StartCluster
	I1101 09:19:33.147957 2441857 settings.go:142] acquiring lock: {Name:mka73a3765cb6575d4abe38a6ae3325222684786 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:19:33.148013 2441857 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:19:33.148697 2441857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/kubeconfig: {Name:mk53329368b7306829f4e47471838b13e1e36d2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:19:33.148902 2441857 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:19:33.148978 2441857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 09:19:33.149276 2441857 config.go:182] Loaded profile config "scheduled-stop-026741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:19:33.149255 2441857 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:19:33.149375 2441857 addons.go:70] Setting storage-provisioner=true in profile "scheduled-stop-026741"
	I1101 09:19:33.149387 2441857 addons.go:239] Setting addon storage-provisioner=true in "scheduled-stop-026741"
	I1101 09:19:33.149412 2441857 host.go:66] Checking if "scheduled-stop-026741" exists ...
	I1101 09:19:33.149880 2441857 cli_runner.go:164] Run: docker container inspect scheduled-stop-026741 --format={{.State.Status}}
	I1101 09:19:33.150014 2441857 addons.go:70] Setting default-storageclass=true in profile "scheduled-stop-026741"
	I1101 09:19:33.150023 2441857 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "scheduled-stop-026741"
	I1101 09:19:33.150361 2441857 cli_runner.go:164] Run: docker container inspect scheduled-stop-026741 --format={{.State.Status}}
	I1101 09:19:33.152753 2441857 out.go:179] * Verifying Kubernetes components...
	I1101 09:19:33.159030 2441857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:19:33.187153 2441857 addons.go:239] Setting addon default-storageclass=true in "scheduled-stop-026741"
	I1101 09:19:33.187180 2441857 host.go:66] Checking if "scheduled-stop-026741" exists ...
	I1101 09:19:33.187596 2441857 cli_runner.go:164] Run: docker container inspect scheduled-stop-026741 --format={{.State.Status}}
	I1101 09:19:33.196022 2441857 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:19:33.199798 2441857 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:19:33.199809 2441857 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:19:33.199897 2441857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-026741
	I1101 09:19:33.229209 2441857 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:19:33.229222 2441857 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:19:33.229285 2441857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-026741
	I1101 09:19:33.250759 2441857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36250 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/scheduled-stop-026741/id_rsa Username:docker}
	I1101 09:19:33.259480 2441857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36250 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/scheduled-stop-026741/id_rsa Username:docker}
	I1101 09:19:33.369854 2441857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 09:19:33.421227 2441857 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:19:33.518475 2441857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:19:33.549401 2441857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:19:33.718360 2441857 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1101 09:19:33.719636 2441857 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:19:33.719692 2441857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:19:34.046915 2441857 api_server.go:72] duration metric: took 897.990427ms to wait for apiserver process to appear ...
	I1101 09:19:34.046926 2441857 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:19:34.046943 2441857 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 09:19:34.049852 2441857 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1101 09:19:34.052842 2441857 addons.go:515] duration metric: took 903.576577ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1101 09:19:34.063482 2441857 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1101 09:19:34.064769 2441857 api_server.go:141] control plane version: v1.34.1
	I1101 09:19:34.064784 2441857 api_server.go:131] duration metric: took 17.853421ms to wait for apiserver health ...
	I1101 09:19:34.064805 2441857 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:19:34.067478 2441857 system_pods.go:59] 5 kube-system pods found
	I1101 09:19:34.067499 2441857 system_pods.go:61] "etcd-scheduled-stop-026741" [84dd32f4-e451-4c6b-b217-5497ecbeebfd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:19:34.067508 2441857 system_pods.go:61] "kube-apiserver-scheduled-stop-026741" [740da596-e3c6-4598-a4fe-452a7bfb6b31] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:19:34.067515 2441857 system_pods.go:61] "kube-controller-manager-scheduled-stop-026741" [b045fad0-d9ca-48dc-b7dd-86d9f2fb9120] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:19:34.067521 2441857 system_pods.go:61] "kube-scheduler-scheduled-stop-026741" [f61a438a-79b9-408c-bb36-5b623ef627f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:19:34.067527 2441857 system_pods.go:61] "storage-provisioner" [02c97da5-6a3b-40f0-a589-36b79eba3260] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 09:19:34.067532 2441857 system_pods.go:74] duration metric: took 2.72269ms to wait for pod list to return data ...
	I1101 09:19:34.067541 2441857 kubeadm.go:587] duration metric: took 918.621132ms to wait for: map[apiserver:true system_pods:true]
	I1101 09:19:34.067553 2441857 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:19:34.070572 2441857 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 09:19:34.070593 2441857 node_conditions.go:123] node cpu capacity is 2
	I1101 09:19:34.070605 2441857 node_conditions.go:105] duration metric: took 3.048186ms to run NodePressure ...
	I1101 09:19:34.070616 2441857 start.go:242] waiting for startup goroutines ...
	I1101 09:19:34.222050 2441857 kapi.go:214] "coredns" deployment in "kube-system" namespace and "scheduled-stop-026741" context rescaled to 1 replicas
	I1101 09:19:34.222083 2441857 start.go:247] waiting for cluster config update ...
	I1101 09:19:34.222095 2441857 start.go:256] writing updated cluster config ...
	I1101 09:19:34.222416 2441857 ssh_runner.go:195] Run: rm -f paused
	I1101 09:19:34.285421 2441857 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 09:19:34.288811 2441857 out.go:179] * Done! kubectl is now configured to use "scheduled-stop-026741" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 01 09:19:25 scheduled-stop-026741 crio[837]: time="2025-11-01T09:19:25.570023245Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:19:25 scheduled-stop-026741 crio[837]: time="2025-11-01T09:19:25.574709392Z" level=info msg="Creating container: kube-system/kube-controller-manager-scheduled-stop-026741/kube-controller-manager" id=ce9f7e02-cb54-4d49-86f8-67f9d2b44c4a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:19:25 scheduled-stop-026741 crio[837]: time="2025-11-01T09:19:25.57493826Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:19:25 scheduled-stop-026741 crio[837]: time="2025-11-01T09:19:25.58092403Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:19:25 scheduled-stop-026741 crio[837]: time="2025-11-01T09:19:25.58185047Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:19:25 scheduled-stop-026741 crio[837]: time="2025-11-01T09:19:25.584345662Z" level=info msg="Creating container: kube-system/kube-apiserver-scheduled-stop-026741/kube-apiserver" id=992af4a3-bbd7-400a-8b35-0d667420bade name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:19:25 scheduled-stop-026741 crio[837]: time="2025-11-01T09:19:25.584492038Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:19:25 scheduled-stop-026741 crio[837]: time="2025-11-01T09:19:25.597207763Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:19:25 scheduled-stop-026741 crio[837]: time="2025-11-01T09:19:25.599990463Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:19:25 scheduled-stop-026741 crio[837]: time="2025-11-01T09:19:25.600517448Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:19:25 scheduled-stop-026741 crio[837]: time="2025-11-01T09:19:25.602113203Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:19:25 scheduled-stop-026741 crio[837]: time="2025-11-01T09:19:25.622264152Z" level=info msg="Created container 08fa74d16c441c4efba5066e061692555308e8b127a2c92d504e3bcbd614dbca: kube-system/kube-scheduler-scheduled-stop-026741/kube-scheduler" id=fe650140-98a4-4951-87cc-a470c17efb37 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:19:25 scheduled-stop-026741 crio[837]: time="2025-11-01T09:19:25.626330714Z" level=info msg="Starting container: 08fa74d16c441c4efba5066e061692555308e8b127a2c92d504e3bcbd614dbca" id=9cc0dac9-9445-4cd4-b2a2-883f017103cd name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:19:25 scheduled-stop-026741 crio[837]: time="2025-11-01T09:19:25.62906425Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:19:25 scheduled-stop-026741 crio[837]: time="2025-11-01T09:19:25.631150502Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:19:25 scheduled-stop-026741 crio[837]: time="2025-11-01T09:19:25.631359694Z" level=info msg="Created container e39f00bb80705dc020a48374d36668acb4439d4c8199fada1f56827fc1519751: kube-system/etcd-scheduled-stop-026741/etcd" id=a82d0e9b-ae58-45b7-af78-31eb9b749dd0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:19:25 scheduled-stop-026741 crio[837]: time="2025-11-01T09:19:25.632313694Z" level=info msg="Starting container: e39f00bb80705dc020a48374d36668acb4439d4c8199fada1f56827fc1519751" id=800b026c-8222-4316-810d-3fed8c8a01a4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:19:25 scheduled-stop-026741 crio[837]: time="2025-11-01T09:19:25.632682037Z" level=info msg="Started container" PID=1233 containerID=08fa74d16c441c4efba5066e061692555308e8b127a2c92d504e3bcbd614dbca description=kube-system/kube-scheduler-scheduled-stop-026741/kube-scheduler id=9cc0dac9-9445-4cd4-b2a2-883f017103cd name=/runtime.v1.RuntimeService/StartContainer sandboxID=f7fc6db0413617b61d2319351d82a7cee74e0d450974d0db71c2cdd9688c62ea
	Nov 01 09:19:25 scheduled-stop-026741 crio[837]: time="2025-11-01T09:19:25.636300726Z" level=info msg="Created container 42f17c1f6708f9e506eae21d8cbf9d25f3436927723fa61e5aa8b1ce0bff0441: kube-system/kube-controller-manager-scheduled-stop-026741/kube-controller-manager" id=ce9f7e02-cb54-4d49-86f8-67f9d2b44c4a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:19:25 scheduled-stop-026741 crio[837]: time="2025-11-01T09:19:25.637244241Z" level=info msg="Starting container: 42f17c1f6708f9e506eae21d8cbf9d25f3436927723fa61e5aa8b1ce0bff0441" id=bb7fccaf-d1be-4882-8c63-b32239175566 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:19:25 scheduled-stop-026741 crio[837]: time="2025-11-01T09:19:25.637449289Z" level=info msg="Started container" PID=1242 containerID=e39f00bb80705dc020a48374d36668acb4439d4c8199fada1f56827fc1519751 description=kube-system/etcd-scheduled-stop-026741/etcd id=800b026c-8222-4316-810d-3fed8c8a01a4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=db18ce5a07dfafef5c26ddbc528a93ef7a6861584ef85074c759a33dc2b5c656
	Nov 01 09:19:25 scheduled-stop-026741 crio[837]: time="2025-11-01T09:19:25.645108966Z" level=info msg="Started container" PID=1241 containerID=42f17c1f6708f9e506eae21d8cbf9d25f3436927723fa61e5aa8b1ce0bff0441 description=kube-system/kube-controller-manager-scheduled-stop-026741/kube-controller-manager id=bb7fccaf-d1be-4882-8c63-b32239175566 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7ba2df0e8d2c77fe0558908b33cee074ff5a5b38c4a861df9a12d6370c92f254
	Nov 01 09:19:25 scheduled-stop-026741 crio[837]: time="2025-11-01T09:19:25.676855122Z" level=info msg="Created container 82e2c70dd792fb05f092f4c10947cf3e34869de44233f87fc8b67a43d98a2765: kube-system/kube-apiserver-scheduled-stop-026741/kube-apiserver" id=992af4a3-bbd7-400a-8b35-0d667420bade name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:19:25 scheduled-stop-026741 crio[837]: time="2025-11-01T09:19:25.67760514Z" level=info msg="Starting container: 82e2c70dd792fb05f092f4c10947cf3e34869de44233f87fc8b67a43d98a2765" id=ae2436ef-554a-4227-9d01-630ced6d7f3a name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:19:25 scheduled-stop-026741 crio[837]: time="2025-11-01T09:19:25.681920376Z" level=info msg="Started container" PID=1266 containerID=82e2c70dd792fb05f092f4c10947cf3e34869de44233f87fc8b67a43d98a2765 description=kube-system/kube-apiserver-scheduled-stop-026741/kube-apiserver id=ae2436ef-554a-4227-9d01-630ced6d7f3a name=/runtime.v1.RuntimeService/StartContainer sandboxID=fe42b8d0e124ba8c73faa1f5507310aaa59e992ee1c518b46ee6daeb0fcf5fd6
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                             NAMESPACE
	82e2c70dd792f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   10 seconds ago      Running             kube-apiserver            0                   fe42b8d0e124b       kube-apiserver-scheduled-stop-026741            kube-system
	42f17c1f6708f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   10 seconds ago      Running             kube-controller-manager   0                   7ba2df0e8d2c7       kube-controller-manager-scheduled-stop-026741   kube-system
	e39f00bb80705       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   10 seconds ago      Running             etcd                      0                   db18ce5a07dfa       etcd-scheduled-stop-026741                      kube-system
	08fa74d16c441       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   10 seconds ago      Running             kube-scheduler            0                   f7fc6db041361       kube-scheduler-scheduled-stop-026741            kube-system
	
	
	==> describe nodes <==
	Name:               scheduled-stop-026741
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=scheduled-stop-026741
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=scheduled-stop-026741
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_19_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:19:29 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  scheduled-stop-026741
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:19:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:19:32 +0000   Sat, 01 Nov 2025 09:19:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:19:32 +0000   Sat, 01 Nov 2025 09:19:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:19:32 +0000   Sat, 01 Nov 2025 09:19:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 01 Nov 2025 09:19:32 +0000   Sat, 01 Nov 2025 09:19:25 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    scheduled-stop-026741
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                746e68c2-fe00-4c56-abe9-f6047a2893fc
	  Boot ID:                    eebecd53-57fd-46e5-aa39-103fca906436
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-scheduled-stop-026741                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         3s
	  kube-system                 kube-apiserver-scheduled-stop-026741             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-controller-manager-scheduled-stop-026741    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kube-scheduler-scheduled-stop-026741             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From     Message
	  ----     ------                   ----  ----     -------
	  Normal   Starting                 4s    kubelet  Starting kubelet.
	  Warning  CgroupV1                 4s    kubelet  cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  3s    kubelet  Node scheduled-stop-026741 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3s    kubelet  Node scheduled-stop-026741 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3s    kubelet  Node scheduled-stop-026741 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[Nov 1 08:55] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:56] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:58] overlayfs: idmapped layers are currently not supported
	[  +2.849561] overlayfs: idmapped layers are currently not supported
	[ +35.815790] overlayfs: idmapped layers are currently not supported
	[Nov 1 08:59] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:00] overlayfs: idmapped layers are currently not supported
	[  +4.169917] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:01] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:02] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:03] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:08] overlayfs: idmapped layers are currently not supported
	[ +35.036001] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:10] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:11] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:12] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:13] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:14] overlayfs: idmapped layers are currently not supported
	[  +7.992192] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:15] overlayfs: idmapped layers are currently not supported
	[ +24.457663] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:16] overlayfs: idmapped layers are currently not supported
	[ +26.408819] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:18] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:19] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e39f00bb80705dc020a48374d36668acb4439d4c8199fada1f56827fc1519751] <==
	{"level":"warn","ts":"2025-11-01T09:19:27.815801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:27.849551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:27.852461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:27.869095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:27.885527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:27.902449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:27.924553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:27.941140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:27.954242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:27.985235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:27.989278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:28.013286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:28.030971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:28.064602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:28.080476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:28.098625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:28.120265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:28.137129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:28.156629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:28.183946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:28.212006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:28.223402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:28.254057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:28.284515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:28.392664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33554","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:19:36 up 18:02,  0 user,  load average: 1.58, 1.60, 1.86
	Linux scheduled-stop-026741 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [82e2c70dd792fb05f092f4c10947cf3e34869de44233f87fc8b67a43d98a2765] <==
	I1101 09:19:29.395012       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 09:19:29.395035       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 09:19:29.395149       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 09:19:29.395149       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 09:19:29.395339       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 09:19:29.396023       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 09:19:29.396091       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 09:19:29.398885       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:19:29.399003       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 09:19:29.415694       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:19:29.426776       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:19:29.587448       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:19:30.097567       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 09:19:30.102873       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 09:19:30.102896       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:19:30.787125       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:19:30.839081       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:19:30.920666       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 09:19:30.948041       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1101 09:19:30.949267       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:19:30.954145       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:19:31.313322       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:19:32.002100       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:19:32.027225       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 09:19:32.041071       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [42f17c1f6708f9e506eae21d8cbf9d25f3436927723fa61e5aa8b1ce0bff0441] <==
	I1101 09:19:34.970765       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1101 09:19:34.970829       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I1101 09:19:35.109140       1 controllermanager.go:781] "Started controller" controller="daemonset-controller"
	I1101 09:19:35.109168       1 controllermanager.go:733] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I1101 09:19:35.109297       1 daemon_controller.go:310] "Starting daemon sets controller" logger="daemonset-controller"
	I1101 09:19:35.109311       1 shared_informer.go:349] "Waiting for caches to sync" controller="daemon sets"
	I1101 09:19:35.308757       1 controllermanager.go:781] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I1101 09:19:35.308807       1 shared_informer.go:349] "Waiting for caches to sync" controller="validatingadmissionpolicy-status"
	I1101 09:19:35.460100       1 controllermanager.go:781] "Started controller" controller="service-cidr-controller"
	I1101 09:19:35.460133       1 controllermanager.go:759] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I1101 09:19:35.460225       1 servicecidrs_controller.go:137] "Starting" logger="service-cidr-controller" controller="service-cidr-controller"
	I1101 09:19:35.460241       1 shared_informer.go:349] "Waiting for caches to sync" controller="service-cidr-controller"
	I1101 09:19:35.610185       1 controllermanager.go:781] "Started controller" controller="token-cleaner-controller"
	I1101 09:19:35.610245       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I1101 09:19:35.610256       1 shared_informer.go:349] "Waiting for caches to sync" controller="token_cleaner"
	I1101 09:19:35.610261       1 shared_informer.go:356] "Caches are synced" controller="token_cleaner"
	I1101 09:19:35.759906       1 controllermanager.go:781] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I1101 09:19:35.759977       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I1101 09:19:35.759986       1 shared_informer.go:349] "Waiting for caches to sync" controller="PVC protection"
	I1101 09:19:35.911603       1 controllermanager.go:781] "Started controller" controller="volumeattributesclass-protection-controller"
	I1101 09:19:35.911659       1 vac_protection_controller.go:206] "Starting VAC protection controller" logger="volumeattributesclass-protection-controller"
	I1101 09:19:35.911668       1 shared_informer.go:349] "Waiting for caches to sync" controller="VAC protection"
	I1101 09:19:36.060100       1 controllermanager.go:781] "Started controller" controller="endpointslice-mirroring-controller"
	I1101 09:19:36.060246       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I1101 09:19:36.060256       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint_slice_mirroring"
	
	
	==> kube-scheduler [08fa74d16c441c4efba5066e061692555308e8b127a2c92d504e3bcbd614dbca] <==
	E1101 09:19:29.338931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:19:29.339211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:19:29.339369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:19:29.339497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:19:29.339798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:19:29.341620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:19:29.341781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:19:29.341891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:19:29.341982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:19:29.342072       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:19:29.342120       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1101 09:19:29.342237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:19:29.342252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:19:29.342916       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:19:30.193601       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:19:30.209802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:19:30.248848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:19:30.259522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:19:30.279554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:19:30.383584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1101 09:19:30.388989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:19:30.402763       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:19:30.451518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:19:30.509804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1101 09:19:33.039951       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:19:32 scheduled-stop-026741 kubelet[1305]: I1101 09:19:32.369765    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/258d18ee0c1ddef90b9cb3975a5e6d77-kubeconfig\") pod \"kube-scheduler-scheduled-stop-026741\" (UID: \"258d18ee0c1ddef90b9cb3975a5e6d77\") " pod="kube-system/kube-scheduler-scheduled-stop-026741"
	Nov 01 09:19:32 scheduled-stop-026741 kubelet[1305]: I1101 09:19:32.369824    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/ec9eae65d5c45d4cb310800a04250197-etcd-certs\") pod \"etcd-scheduled-stop-026741\" (UID: \"ec9eae65d5c45d4cb310800a04250197\") " pod="kube-system/etcd-scheduled-stop-026741"
	Nov 01 09:19:32 scheduled-stop-026741 kubelet[1305]: I1101 09:19:32.369845    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eddec14d1509304e0d58ed0ca68dfb9c-etc-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-026741\" (UID: \"eddec14d1509304e0d58ed0ca68dfb9c\") " pod="kube-system/kube-controller-manager-scheduled-stop-026741"
	Nov 01 09:19:32 scheduled-stop-026741 kubelet[1305]: I1101 09:19:32.370315    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eddec14d1509304e0d58ed0ca68dfb9c-kubeconfig\") pod \"kube-controller-manager-scheduled-stop-026741\" (UID: \"eddec14d1509304e0d58ed0ca68dfb9c\") " pod="kube-system/kube-controller-manager-scheduled-stop-026741"
	Nov 01 09:19:32 scheduled-stop-026741 kubelet[1305]: I1101 09:19:32.370362    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/ec9eae65d5c45d4cb310800a04250197-etcd-data\") pod \"etcd-scheduled-stop-026741\" (UID: \"ec9eae65d5c45d4cb310800a04250197\") " pod="kube-system/etcd-scheduled-stop-026741"
	Nov 01 09:19:32 scheduled-stop-026741 kubelet[1305]: I1101 09:19:32.370389    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bf889caeb7b4aa054f370b567d8ed55f-etc-ca-certificates\") pod \"kube-apiserver-scheduled-stop-026741\" (UID: \"bf889caeb7b4aa054f370b567d8ed55f\") " pod="kube-system/kube-apiserver-scheduled-stop-026741"
	Nov 01 09:19:32 scheduled-stop-026741 kubelet[1305]: I1101 09:19:32.370414    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bf889caeb7b4aa054f370b567d8ed55f-usr-local-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-026741\" (UID: \"bf889caeb7b4aa054f370b567d8ed55f\") " pod="kube-system/kube-apiserver-scheduled-stop-026741"
	Nov 01 09:19:32 scheduled-stop-026741 kubelet[1305]: I1101 09:19:32.370437    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bf889caeb7b4aa054f370b567d8ed55f-usr-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-026741\" (UID: \"bf889caeb7b4aa054f370b567d8ed55f\") " pod="kube-system/kube-apiserver-scheduled-stop-026741"
	Nov 01 09:19:32 scheduled-stop-026741 kubelet[1305]: I1101 09:19:32.370467    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eddec14d1509304e0d58ed0ca68dfb9c-k8s-certs\") pod \"kube-controller-manager-scheduled-stop-026741\" (UID: \"eddec14d1509304e0d58ed0ca68dfb9c\") " pod="kube-system/kube-controller-manager-scheduled-stop-026741"
	Nov 01 09:19:32 scheduled-stop-026741 kubelet[1305]: I1101 09:19:32.370542    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eddec14d1509304e0d58ed0ca68dfb9c-usr-local-share-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-026741\" (UID: \"eddec14d1509304e0d58ed0ca68dfb9c\") " pod="kube-system/kube-controller-manager-scheduled-stop-026741"
	Nov 01 09:19:32 scheduled-stop-026741 kubelet[1305]: I1101 09:19:32.370577    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eddec14d1509304e0d58ed0ca68dfb9c-ca-certs\") pod \"kube-controller-manager-scheduled-stop-026741\" (UID: \"eddec14d1509304e0d58ed0ca68dfb9c\") " pod="kube-system/kube-controller-manager-scheduled-stop-026741"
	Nov 01 09:19:32 scheduled-stop-026741 kubelet[1305]: I1101 09:19:32.370604    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eddec14d1509304e0d58ed0ca68dfb9c-usr-share-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-026741\" (UID: \"eddec14d1509304e0d58ed0ca68dfb9c\") " pod="kube-system/kube-controller-manager-scheduled-stop-026741"
	Nov 01 09:19:32 scheduled-stop-026741 kubelet[1305]: I1101 09:19:32.370630    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bf889caeb7b4aa054f370b567d8ed55f-ca-certs\") pod \"kube-apiserver-scheduled-stop-026741\" (UID: \"bf889caeb7b4aa054f370b567d8ed55f\") " pod="kube-system/kube-apiserver-scheduled-stop-026741"
	Nov 01 09:19:32 scheduled-stop-026741 kubelet[1305]: I1101 09:19:32.370653    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bf889caeb7b4aa054f370b567d8ed55f-k8s-certs\") pod \"kube-apiserver-scheduled-stop-026741\" (UID: \"bf889caeb7b4aa054f370b567d8ed55f\") " pod="kube-system/kube-apiserver-scheduled-stop-026741"
	Nov 01 09:19:32 scheduled-stop-026741 kubelet[1305]: I1101 09:19:32.370679    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/eddec14d1509304e0d58ed0ca68dfb9c-flexvolume-dir\") pod \"kube-controller-manager-scheduled-stop-026741\" (UID: \"eddec14d1509304e0d58ed0ca68dfb9c\") " pod="kube-system/kube-controller-manager-scheduled-stop-026741"
	Nov 01 09:19:32 scheduled-stop-026741 kubelet[1305]: I1101 09:19:32.927760    1305 apiserver.go:52] "Watching apiserver"
	Nov 01 09:19:32 scheduled-stop-026741 kubelet[1305]: I1101 09:19:32.966266    1305 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 01 09:19:32 scheduled-stop-026741 kubelet[1305]: I1101 09:19:32.984794    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-scheduled-stop-026741" podStartSLOduration=2.984775819 podStartE2EDuration="2.984775819s" podCreationTimestamp="2025-11-01 09:19:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:19:32.984534907 +0000 UTC m=+1.154382021" watchObservedRunningTime="2025-11-01 09:19:32.984775819 +0000 UTC m=+1.154622934"
	Nov 01 09:19:32 scheduled-stop-026741 kubelet[1305]: I1101 09:19:32.984972    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-scheduled-stop-026741" podStartSLOduration=1.9849658940000001 podStartE2EDuration="1.984965894s" podCreationTimestamp="2025-11-01 09:19:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:19:32.97055927 +0000 UTC m=+1.140406376" watchObservedRunningTime="2025-11-01 09:19:32.984965894 +0000 UTC m=+1.154813000"
	Nov 01 09:19:33 scheduled-stop-026741 kubelet[1305]: I1101 09:19:33.013886    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-scheduled-stop-026741" podStartSLOduration=1.013863956 podStartE2EDuration="1.013863956s" podCreationTimestamp="2025-11-01 09:19:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:19:32.99798685 +0000 UTC m=+1.167833956" watchObservedRunningTime="2025-11-01 09:19:33.013863956 +0000 UTC m=+1.183711054"
	Nov 01 09:19:33 scheduled-stop-026741 kubelet[1305]: I1101 09:19:33.030302    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-scheduled-stop-026741" podStartSLOduration=1.030283669 podStartE2EDuration="1.030283669s" podCreationTimestamp="2025-11-01 09:19:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:19:33.015761602 +0000 UTC m=+1.185608708" watchObservedRunningTime="2025-11-01 09:19:33.030283669 +0000 UTC m=+1.200130775"
	Nov 01 09:19:33 scheduled-stop-026741 kubelet[1305]: I1101 09:19:33.047733    1305 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-scheduled-stop-026741"
	Nov 01 09:19:33 scheduled-stop-026741 kubelet[1305]: I1101 09:19:33.048134    1305 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-scheduled-stop-026741"
	Nov 01 09:19:33 scheduled-stop-026741 kubelet[1305]: E1101 09:19:33.069341    1305 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-scheduled-stop-026741\" already exists" pod="kube-system/kube-scheduler-scheduled-stop-026741"
	Nov 01 09:19:33 scheduled-stop-026741 kubelet[1305]: E1101 09:19:33.083518    1305 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-scheduled-stop-026741\" already exists" pod="kube-system/kube-apiserver-scheduled-stop-026741"
	

                                                
                                                
-- /stdout --
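A note on the kubelet tail above: the two "Failed creating a mirror pod ... already exists" errors come from the kubelet re-registering its static pods after a restart and finding their mirror pods still present in the API server; on their own they do not explain the scheduled-stop failure. A quick way to confirm those mirror pods are still there (a sketch reusing the same --context the harness uses; run it while the cluster is still up) is:

	kubectl --context scheduled-stop-026741 -n kube-system get pod kube-scheduler-scheduled-stop-026741 kube-apiserver-scheduled-stop-026741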
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p scheduled-stop-026741 -n scheduled-stop-026741
helpers_test.go:269: (dbg) Run:  kubectl --context scheduled-stop-026741 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: storage-provisioner
helpers_test.go:282: ======> post-mortem[TestScheduledStopUnix]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context scheduled-stop-026741 describe pod storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context scheduled-stop-026741 describe pod storage-provisioner: exit status 1 (87.247148ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context scheduled-stop-026741 describe pod storage-provisioner: exit status 1
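The NotFound above is not itself a bug: the field-selector query and the describe are two separate kubectl calls, so the storage-provisioner pod reported as non-running can disappear before describe runs. A single query that lists the non-running pods together with their namespaces would look like the sketch below (same --context and field selector as the harness; the jsonpath expression is an illustration, not part of the test):

	kubectl --context scheduled-stop-026741 get po -A --field-selector=status.phase!=Running \
	  -o=jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{"\n"}{end}'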
helpers_test.go:175: Cleaning up "scheduled-stop-026741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-026741
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-026741: (2.146154944s)
--- FAIL: TestScheduledStopUnix (37.78s)

                                                
                                    
x
+
TestPause/serial/Pause (8.45s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-951206 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-951206 --alsologtostderr -v=5: exit status 80 (2.44057573s)

                                                
                                                
-- stdout --
	* Pausing node pause-951206 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:25:32.331610 2481263 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:25:32.332773 2481263 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:25:32.332814 2481263 out.go:374] Setting ErrFile to fd 2...
	I1101 09:25:32.332834 2481263 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:25:32.333136 2481263 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 09:25:32.333437 2481263 out.go:368] Setting JSON to false
	I1101 09:25:32.333492 2481263 mustload.go:66] Loading cluster: pause-951206
	I1101 09:25:32.333995 2481263 config.go:182] Loaded profile config "pause-951206": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:25:32.334513 2481263 cli_runner.go:164] Run: docker container inspect pause-951206 --format={{.State.Status}}
	I1101 09:25:32.364036 2481263 host.go:66] Checking if "pause-951206" exists ...
	I1101 09:25:32.364379 2481263 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:25:32.461489 2481263 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-01 09:25:32.450777291 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:25:32.463388 2481263 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-951206 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 09:25:32.466382 2481263 out.go:179] * Pausing node pause-951206 ... 
	I1101 09:25:32.470150 2481263 host.go:66] Checking if "pause-951206" exists ...
	I1101 09:25:32.470475 2481263 ssh_runner.go:195] Run: systemctl --version
	I1101 09:25:32.470526 2481263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-951206
	I1101 09:25:32.488760 2481263 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36310 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/pause-951206/id_rsa Username:docker}
	I1101 09:25:32.596462 2481263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:25:32.618617 2481263 pause.go:52] kubelet running: true
	I1101 09:25:32.618678 2481263 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:25:32.955410 2481263 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:25:32.955497 2481263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:25:33.090478 2481263 cri.go:89] found id: "ea40b56f7de32b3254ff839c8bd72fe33fe815b4ec8f7ab3ba120646dffc2676"
	I1101 09:25:33.090539 2481263 cri.go:89] found id: "15815aa5b1fd6da5af7f27e0905719d302ad2a07087e13caf2ccc3b4c889d5fd"
	I1101 09:25:33.090557 2481263 cri.go:89] found id: "72980c4c54ed2cf65c1907185972beba807212bf90a9e38c9bef2ba5a40a4f27"
	I1101 09:25:33.090579 2481263 cri.go:89] found id: "488f329d3b067e2efa5f2037a5cad268b4ba316f7b54011aaa9c30ec6aee51cc"
	I1101 09:25:33.090597 2481263 cri.go:89] found id: "e0e2262ea0f4e166f11273995407222648770de6a2fb43aaafc290e160ee6f6d"
	I1101 09:25:33.090626 2481263 cri.go:89] found id: "1e267b46f1d9e93b1931162cf060aaf69731ece8d757ffde3a2582cfd7651ffb"
	I1101 09:25:33.090648 2481263 cri.go:89] found id: "151ac9fa211263a6da9c02d44bf9df8af1a169e8ad976bb46ffd74c1cc8a3b89"
	I1101 09:25:33.090665 2481263 cri.go:89] found id: "8c588c62b138bc6cc1aaeae9bc15a83731cfe0ee7bdd104f8d28c7b0b80aee31"
	I1101 09:25:33.090683 2481263 cri.go:89] found id: "b504ec0758f08784fd92576699693b07ab12f61917f04b0c3f9548f87aa4e834"
	I1101 09:25:33.090714 2481263 cri.go:89] found id: "ae4144162ff99a96713f4b79715f1b459b8757fe18777f4b87377958ea076cd5"
	I1101 09:25:33.090747 2481263 cri.go:89] found id: "0cdb4df035719000dd91d22036208e0ef5b5c165830c9fe2e474beebd7fa8f3d"
	I1101 09:25:33.090763 2481263 cri.go:89] found id: "c2887326593949338658d54bb176a3f92ca0ce4d7619db8e75d1d9e2fd3c297c"
	I1101 09:25:33.090780 2481263 cri.go:89] found id: "50e4c19be115cbd5a27bca874f1801f8fb8f2ae6a82fb47904af1031ae88e97a"
	I1101 09:25:33.090797 2481263 cri.go:89] found id: "c6f012ce8b285f2c250e9e5f1e148dce648fbf07bd9c33e499baf85c396f37d8"
	I1101 09:25:33.090823 2481263 cri.go:89] found id: ""
	I1101 09:25:33.090889 2481263 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:25:33.108553 2481263 retry.go:31] will retry after 371.56079ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:25:33Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:25:33.481071 2481263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:25:33.502703 2481263 pause.go:52] kubelet running: false
	I1101 09:25:33.502824 2481263 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:25:33.766851 2481263 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:25:33.766971 2481263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:25:33.906044 2481263 cri.go:89] found id: "ea40b56f7de32b3254ff839c8bd72fe33fe815b4ec8f7ab3ba120646dffc2676"
	I1101 09:25:33.906107 2481263 cri.go:89] found id: "15815aa5b1fd6da5af7f27e0905719d302ad2a07087e13caf2ccc3b4c889d5fd"
	I1101 09:25:33.906125 2481263 cri.go:89] found id: "72980c4c54ed2cf65c1907185972beba807212bf90a9e38c9bef2ba5a40a4f27"
	I1101 09:25:33.906141 2481263 cri.go:89] found id: "488f329d3b067e2efa5f2037a5cad268b4ba316f7b54011aaa9c30ec6aee51cc"
	I1101 09:25:33.906158 2481263 cri.go:89] found id: "e0e2262ea0f4e166f11273995407222648770de6a2fb43aaafc290e160ee6f6d"
	I1101 09:25:33.906187 2481263 cri.go:89] found id: "1e267b46f1d9e93b1931162cf060aaf69731ece8d757ffde3a2582cfd7651ffb"
	I1101 09:25:33.906212 2481263 cri.go:89] found id: "151ac9fa211263a6da9c02d44bf9df8af1a169e8ad976bb46ffd74c1cc8a3b89"
	I1101 09:25:33.906231 2481263 cri.go:89] found id: "8c588c62b138bc6cc1aaeae9bc15a83731cfe0ee7bdd104f8d28c7b0b80aee31"
	I1101 09:25:33.906249 2481263 cri.go:89] found id: "b504ec0758f08784fd92576699693b07ab12f61917f04b0c3f9548f87aa4e834"
	I1101 09:25:33.906270 2481263 cri.go:89] found id: "ae4144162ff99a96713f4b79715f1b459b8757fe18777f4b87377958ea076cd5"
	I1101 09:25:33.906298 2481263 cri.go:89] found id: "0cdb4df035719000dd91d22036208e0ef5b5c165830c9fe2e474beebd7fa8f3d"
	I1101 09:25:33.906320 2481263 cri.go:89] found id: "c2887326593949338658d54bb176a3f92ca0ce4d7619db8e75d1d9e2fd3c297c"
	I1101 09:25:33.906338 2481263 cri.go:89] found id: "50e4c19be115cbd5a27bca874f1801f8fb8f2ae6a82fb47904af1031ae88e97a"
	I1101 09:25:33.906355 2481263 cri.go:89] found id: "c6f012ce8b285f2c250e9e5f1e148dce648fbf07bd9c33e499baf85c396f37d8"
	I1101 09:25:33.906372 2481263 cri.go:89] found id: ""
	I1101 09:25:33.906445 2481263 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:25:33.926764 2481263 retry.go:31] will retry after 433.220283ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:25:33Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:25:34.360213 2481263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:25:34.374105 2481263 pause.go:52] kubelet running: false
	I1101 09:25:34.374243 2481263 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:25:34.539974 2481263 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:25:34.540099 2481263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:25:34.632580 2481263 cri.go:89] found id: "ea40b56f7de32b3254ff839c8bd72fe33fe815b4ec8f7ab3ba120646dffc2676"
	I1101 09:25:34.632652 2481263 cri.go:89] found id: "15815aa5b1fd6da5af7f27e0905719d302ad2a07087e13caf2ccc3b4c889d5fd"
	I1101 09:25:34.632671 2481263 cri.go:89] found id: "72980c4c54ed2cf65c1907185972beba807212bf90a9e38c9bef2ba5a40a4f27"
	I1101 09:25:34.632690 2481263 cri.go:89] found id: "488f329d3b067e2efa5f2037a5cad268b4ba316f7b54011aaa9c30ec6aee51cc"
	I1101 09:25:34.632720 2481263 cri.go:89] found id: "e0e2262ea0f4e166f11273995407222648770de6a2fb43aaafc290e160ee6f6d"
	I1101 09:25:34.632745 2481263 cri.go:89] found id: "1e267b46f1d9e93b1931162cf060aaf69731ece8d757ffde3a2582cfd7651ffb"
	I1101 09:25:34.632763 2481263 cri.go:89] found id: "151ac9fa211263a6da9c02d44bf9df8af1a169e8ad976bb46ffd74c1cc8a3b89"
	I1101 09:25:34.632780 2481263 cri.go:89] found id: "8c588c62b138bc6cc1aaeae9bc15a83731cfe0ee7bdd104f8d28c7b0b80aee31"
	I1101 09:25:34.632798 2481263 cri.go:89] found id: "b504ec0758f08784fd92576699693b07ab12f61917f04b0c3f9548f87aa4e834"
	I1101 09:25:34.632828 2481263 cri.go:89] found id: "ae4144162ff99a96713f4b79715f1b459b8757fe18777f4b87377958ea076cd5"
	I1101 09:25:34.632850 2481263 cri.go:89] found id: "0cdb4df035719000dd91d22036208e0ef5b5c165830c9fe2e474beebd7fa8f3d"
	I1101 09:25:34.632868 2481263 cri.go:89] found id: "c2887326593949338658d54bb176a3f92ca0ce4d7619db8e75d1d9e2fd3c297c"
	I1101 09:25:34.632885 2481263 cri.go:89] found id: "50e4c19be115cbd5a27bca874f1801f8fb8f2ae6a82fb47904af1031ae88e97a"
	I1101 09:25:34.632911 2481263 cri.go:89] found id: "c6f012ce8b285f2c250e9e5f1e148dce648fbf07bd9c33e499baf85c396f37d8"
	I1101 09:25:34.632937 2481263 cri.go:89] found id: ""
	I1101 09:25:34.633028 2481263 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:25:34.646803 2481263 out.go:203] 
	W1101 09:25:34.649606 2481263 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:25:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:25:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:25:34.649626 2481263 out.go:285] * 
	* 
	W1101 09:25:34.661906 2481263 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:25:34.664845 2481263 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-951206 --alsologtostderr -v=5" : exit status 80
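The stderr above shows where the pause path gives up: minikube disables the kubelet, lists the kube-system containers through crictl (the fourteen cri.go:89 IDs), then tries to enumerate running containers with "sudo runc list -f json", which exits 1 because /run/runc does not exist on this node; after two retries it aborts with GUEST_PAUSE. A rough way to reproduce the failing probe by hand is sketched below. The individual commands mirror the ones in the log; wrapping them in "minikube ssh -p pause-951206 --" is an assumption about how to reach the node, not something the harness does.

	# hypothetical reproduction sketch, not taken from the report
	minikube ssh -p pause-951206 -- sudo systemctl is-active --quiet service kubelet; echo "kubelet active: $?"
	minikube ssh -p pause-951206 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	minikube ssh -p pause-951206 -- sudo runc list -f json   # fails: open /run/runc: no such file or directory
	minikube ssh -p pause-951206 -- ls -d /run/runc          # confirms the runc state directory is missing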
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-951206
helpers_test.go:243: (dbg) docker inspect pause-951206:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "abfc5fa13ffc725484d745220e19d608d4a1d831946506e06159dfda90300c7a",
	        "Created": "2025-11-01T09:23:42.617544868Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2471153,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:23:42.690792213Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/abfc5fa13ffc725484d745220e19d608d4a1d831946506e06159dfda90300c7a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/abfc5fa13ffc725484d745220e19d608d4a1d831946506e06159dfda90300c7a/hostname",
	        "HostsPath": "/var/lib/docker/containers/abfc5fa13ffc725484d745220e19d608d4a1d831946506e06159dfda90300c7a/hosts",
	        "LogPath": "/var/lib/docker/containers/abfc5fa13ffc725484d745220e19d608d4a1d831946506e06159dfda90300c7a/abfc5fa13ffc725484d745220e19d608d4a1d831946506e06159dfda90300c7a-json.log",
	        "Name": "/pause-951206",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-951206:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-951206",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "abfc5fa13ffc725484d745220e19d608d4a1d831946506e06159dfda90300c7a",
	                "LowerDir": "/var/lib/docker/overlay2/67650ff6703aef5979c767bdfabcb5b7fa22a3f0bc789d102f9a086ad487e913-init/diff:/var/lib/docker/overlay2/e248e2c4c8c52e2b41c7098e27a1e6d3433c7b0d01c47093073da500268c4b77/diff",
	                "MergedDir": "/var/lib/docker/overlay2/67650ff6703aef5979c767bdfabcb5b7fa22a3f0bc789d102f9a086ad487e913/merged",
	                "UpperDir": "/var/lib/docker/overlay2/67650ff6703aef5979c767bdfabcb5b7fa22a3f0bc789d102f9a086ad487e913/diff",
	                "WorkDir": "/var/lib/docker/overlay2/67650ff6703aef5979c767bdfabcb5b7fa22a3f0bc789d102f9a086ad487e913/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-951206",
	                "Source": "/var/lib/docker/volumes/pause-951206/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-951206",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-951206",
	                "name.minikube.sigs.k8s.io": "pause-951206",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "34752ceea37b121cdb9cdf5063c0688acb7a287623ef676bf254afeefb206183",
	            "SandboxKey": "/var/run/docker/netns/34752ceea37b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36310"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36311"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36314"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36312"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36313"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-951206": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:49:1e:6c:2e:46",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "94f1ea4f501e9b1fe920324e249aba2358057c0615454cb6a22317732f3b8aad",
	                    "EndpointID": "2dbb3722a6393f7065b9be3cf10256db6a24ca82887c18688d0a171319e0861f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-951206",
	                        "abfc5fa13ffc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
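The NetworkSettings.Ports block above is what the pause command queried earlier: the 22/tcp HostPort of 36310 is the same port the sshutil line in the stderr reports connecting to (new ssh client: &{IP:127.0.0.1 Port:36310 ...}). To pull just that field outside the harness you can reuse the exact --format expression from the log:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-951206
	# prints 36310 for this run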
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-951206 -n pause-951206
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-951206 -n pause-951206: exit status 2 (403.68939ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-951206 logs -n 25
E1101 09:25:35.275892 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/functional-700813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-951206 logs -n 25: (1.794077161s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-206273 sudo journalctl -xeu kubelet --all --full --no-pager                                      │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:24 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:24 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:24 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo systemctl status docker --all --full --no-pager                                      │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:24 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo systemctl cat docker --no-pager                                                      │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:24 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo cat /etc/docker/daemon.json                                                          │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:24 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo docker system info                                                                   │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:24 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo cri-dockerd --version                                                                │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo systemctl cat containerd --no-pager                                                  │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo cat /etc/containerd/config.toml                                                      │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo containerd config dump                                                               │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo systemctl status crio --all --full --no-pager                                        │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo systemctl cat crio --no-pager                                                        │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo crio config                                                                          │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ delete  │ -p cilium-206273                                                                                           │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │ 01 Nov 25 09:25 UTC │
	│ start   │ -p force-systemd-env-778652 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-778652 │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ start   │ -p pause-951206 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                           │ pause-951206             │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │ 01 Nov 25 09:25 UTC │
	│ pause   │ -p pause-951206 --alsologtostderr -v=5                                                                     │ pause-951206             │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:25:02
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:25:02.268798 2478398 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:25:02.268919 2478398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:25:02.268924 2478398 out.go:374] Setting ErrFile to fd 2...
	I1101 09:25:02.268928 2478398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:25:02.269165 2478398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 09:25:02.269530 2478398 out.go:368] Setting JSON to false
	I1101 09:25:02.270522 2478398 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":65248,"bootTime":1761923854,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 09:25:02.270585 2478398 start.go:143] virtualization:  
	I1101 09:25:02.274033 2478398 out.go:179] * [pause-951206] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 09:25:02.278200 2478398 notify.go:221] Checking for updates...
	I1101 09:25:02.282648 2478398 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:25:02.288397 2478398 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:25:02.291563 2478398 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:25:02.294633 2478398 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2314135/.minikube
	I1101 09:25:02.298354 2478398 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 09:25:02.301539 2478398 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:25:02.304965 2478398 config.go:182] Loaded profile config "pause-951206": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:25:02.305513 2478398 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:25:02.352707 2478398 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:25:02.352821 2478398 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:25:02.439228 2478398 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:58 SystemTime:2025-11-01 09:25:02.416182009 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:25:02.439339 2478398 docker.go:319] overlay module found
	I1101 09:25:02.443015 2478398 out.go:179] * Using the docker driver based on existing profile
	I1101 09:25:02.445878 2478398 start.go:309] selected driver: docker
	I1101 09:25:02.445902 2478398 start.go:930] validating driver "docker" against &{Name:pause-951206 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-951206 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:25:02.446084 2478398 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:25:02.446208 2478398 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:25:02.532824 2478398 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 09:25:02.523640442 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:25:02.533331 2478398 cni.go:84] Creating CNI manager for ""
	I1101 09:25:02.533407 2478398 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:25:02.533465 2478398 start.go:353] cluster config:
	{Name:pause-951206 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-951206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:25:02.539887 2478398 out.go:179] * Starting "pause-951206" primary control-plane node in "pause-951206" cluster
	I1101 09:25:02.544035 2478398 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:25:02.548059 2478398 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:25:02.552130 2478398 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:25:02.552137 2478398 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:25:02.552192 2478398 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 09:25:02.552208 2478398 cache.go:59] Caching tarball of preloaded images
	I1101 09:25:02.552293 2478398 preload.go:233] Found /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 09:25:02.552302 2478398 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:25:02.552443 2478398 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/pause-951206/config.json ...
	I1101 09:25:02.584407 2478398 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:25:02.584435 2478398 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:25:02.584448 2478398 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:25:02.584586 2478398 start.go:360] acquireMachinesLock for pause-951206: {Name:mkdc7ab99ea2756e15d5e7197b949eac20411fc7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:25:02.584658 2478398 start.go:364] duration metric: took 40.196µs to acquireMachinesLock for "pause-951206"
	I1101 09:25:02.584683 2478398 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:25:02.584692 2478398 fix.go:54] fixHost starting: 
	I1101 09:25:02.584953 2478398 cli_runner.go:164] Run: docker container inspect pause-951206 --format={{.State.Status}}
	I1101 09:25:02.621156 2478398 fix.go:112] recreateIfNeeded on pause-951206: state=Running err=<nil>
	W1101 09:25:02.621193 2478398 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:25:01.712582 2478201 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 09:25:01.712904 2478201 start.go:159] libmachine.API.Create for "force-systemd-env-778652" (driver="docker")
	I1101 09:25:01.712948 2478201 client.go:173] LocalClient.Create starting
	I1101 09:25:01.713017 2478201 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem
	I1101 09:25:01.713055 2478201 main.go:143] libmachine: Decoding PEM data...
	I1101 09:25:01.713076 2478201 main.go:143] libmachine: Parsing certificate...
	I1101 09:25:01.713141 2478201 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem
	I1101 09:25:01.713163 2478201 main.go:143] libmachine: Decoding PEM data...
	I1101 09:25:01.713184 2478201 main.go:143] libmachine: Parsing certificate...
	I1101 09:25:01.713587 2478201 cli_runner.go:164] Run: docker network inspect force-systemd-env-778652 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 09:25:01.732633 2478201 cli_runner.go:211] docker network inspect force-systemd-env-778652 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 09:25:01.732743 2478201 network_create.go:284] running [docker network inspect force-systemd-env-778652] to gather additional debugging logs...
	I1101 09:25:01.732770 2478201 cli_runner.go:164] Run: docker network inspect force-systemd-env-778652
	W1101 09:25:01.747835 2478201 cli_runner.go:211] docker network inspect force-systemd-env-778652 returned with exit code 1
	I1101 09:25:01.748003 2478201 network_create.go:287] error running [docker network inspect force-systemd-env-778652]: docker network inspect force-systemd-env-778652: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-778652 not found
	I1101 09:25:01.748022 2478201 network_create.go:289] output of [docker network inspect force-systemd-env-778652]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-778652 not found
	
	** /stderr **
	I1101 09:25:01.748251 2478201 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:25:01.767056 2478201 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2d14cb2bf967 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:44:96:dd:d5:f7} reservation:<nil>}
	I1101 09:25:01.767453 2478201 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5e2113ca68f6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fa:43:2d:73:9d:6f} reservation:<nil>}
	I1101 09:25:01.767822 2478201 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-06825307e87a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:46:bb:6a:93:8e:bc} reservation:<nil>}
	I1101 09:25:01.768395 2478201 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400197bc40}
	I1101 09:25:01.768423 2478201 network_create.go:124] attempt to create docker network force-systemd-env-778652 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1101 09:25:01.768500 2478201 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-778652 force-systemd-env-778652
	I1101 09:25:01.839131 2478201 network_create.go:108] docker network force-systemd-env-778652 192.168.76.0/24 created
	I1101 09:25:01.839165 2478201 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-env-778652" container
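The probe above walks the existing bridge networks and settles on the first free /24; a quick way to confirm what Docker actually recorded for the new network (a minimal sketch, mirroring the inspect template used elsewhere in this log):

	# Show the subnet and gateway of the freshly created cluster network
	docker network inspect force-systemd-env-778652 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'
	# per the log above this should print: 192.168.76.0/24 via 192.168.76.1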
	I1101 09:25:01.839243 2478201 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 09:25:01.866455 2478201 cli_runner.go:164] Run: docker volume create force-systemd-env-778652 --label name.minikube.sigs.k8s.io=force-systemd-env-778652 --label created_by.minikube.sigs.k8s.io=true
	I1101 09:25:01.886587 2478201 oci.go:103] Successfully created a docker volume force-systemd-env-778652
	I1101 09:25:01.886681 2478201 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-778652-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-778652 --entrypoint /usr/bin/test -v force-systemd-env-778652:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 09:25:02.515832 2478201 oci.go:107] Successfully prepared a docker volume force-systemd-env-778652
	I1101 09:25:02.515894 2478201 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:25:02.515914 2478201 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 09:25:02.515980 2478201 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-778652:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
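Both docker run invocations above use the kicbase image as a throwaway helper container: first to probe the volume, then to untar the lz4 preload into it. A hedged sketch of that second step, with the tarball path and image digest taken from the log:

	PRELOAD=/home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	KICBASE="gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8"
	# Unpack the preloaded images into the machine volume before the node container starts
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PRELOAD:/preloaded.tar:ro" -v force-systemd-env-778652:/extractDir \
	  "$KICBASE" -I lz4 -xf /preloaded.tar -C /extractDir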
	I1101 09:25:02.624713 2478398 out.go:252] * Updating the running docker "pause-951206" container ...
	I1101 09:25:02.624752 2478398 machine.go:94] provisionDockerMachine start ...
	I1101 09:25:02.624844 2478398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-951206
	I1101 09:25:02.647678 2478398 main.go:143] libmachine: Using SSH client type: native
	I1101 09:25:02.648031 2478398 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36310 <nil> <nil>}
	I1101 09:25:02.648058 2478398 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:25:02.835632 2478398 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-951206
	
	I1101 09:25:02.835656 2478398 ubuntu.go:182] provisioning hostname "pause-951206"
	I1101 09:25:02.835728 2478398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-951206
	I1101 09:25:02.867627 2478398 main.go:143] libmachine: Using SSH client type: native
	I1101 09:25:02.867984 2478398 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36310 <nil> <nil>}
	I1101 09:25:02.868001 2478398 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-951206 && echo "pause-951206" | sudo tee /etc/hostname
	I1101 09:25:03.055577 2478398 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-951206
	
	I1101 09:25:03.055663 2478398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-951206
	I1101 09:25:03.086683 2478398 main.go:143] libmachine: Using SSH client type: native
	I1101 09:25:03.087070 2478398 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36310 <nil> <nil>}
	I1101 09:25:03.087094 2478398 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-951206' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-951206/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-951206' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:25:03.264576 2478398 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:25:03.264603 2478398 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-2314135/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-2314135/.minikube}
	I1101 09:25:03.264642 2478398 ubuntu.go:190] setting up certificates
	I1101 09:25:03.264657 2478398 provision.go:84] configureAuth start
	I1101 09:25:03.264719 2478398 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-951206
	I1101 09:25:03.289653 2478398 provision.go:143] copyHostCerts
	I1101 09:25:03.289726 2478398 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem, removing ...
	I1101 09:25:03.289747 2478398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem
	I1101 09:25:03.289823 2478398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem (1082 bytes)
	I1101 09:25:03.289915 2478398 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem, removing ...
	I1101 09:25:03.289926 2478398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem
	I1101 09:25:03.289955 2478398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem (1123 bytes)
	I1101 09:25:03.290009 2478398 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem, removing ...
	I1101 09:25:03.290017 2478398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem
	I1101 09:25:03.290041 2478398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem (1675 bytes)
	I1101 09:25:03.290089 2478398 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem org=jenkins.pause-951206 san=[127.0.0.1 192.168.85.2 localhost minikube pause-951206]
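The server certificate generated here is signed by the minikube CA and carries the SANs listed in the log line above; one way to confirm them after provisioning (illustrative):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
	# expect roughly: DNS:localhost, DNS:minikube, DNS:pause-951206, IP Address:127.0.0.1, IP Address:192.168.85.2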
	I1101 09:25:03.784606 2478398 provision.go:177] copyRemoteCerts
	I1101 09:25:03.784674 2478398 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:25:03.784717 2478398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-951206
	I1101 09:25:03.802621 2478398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36310 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/pause-951206/id_rsa Username:docker}
	I1101 09:25:03.920168 2478398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:25:03.955125 2478398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 09:25:03.974650 2478398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 09:25:03.994861 2478398 provision.go:87] duration metric: took 730.176559ms to configureAuth
	I1101 09:25:03.994897 2478398 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:25:03.995137 2478398 config.go:182] Loaded profile config "pause-951206": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:25:03.995275 2478398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-951206
	I1101 09:25:04.018477 2478398 main.go:143] libmachine: Using SSH client type: native
	I1101 09:25:04.018838 2478398 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36310 <nil> <nil>}
	I1101 09:25:04.018860 2478398 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:25:09.383979 2478398 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:25:09.384005 2478398 machine.go:97] duration metric: took 6.759244613s to provisionDockerMachine
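The SSH command above writes a small sysconfig drop-in and restarts CRI-O to pick it up; a sketch of what should be on the node afterwards (file content copied from the log; that the crio unit actually sources this file is an assumption about the kicbase image):

	cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	sudo systemctl restart crio   # re-applied by restarting the runtime, as in the log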
	I1101 09:25:09.384017 2478398 start.go:293] postStartSetup for "pause-951206" (driver="docker")
	I1101 09:25:09.384028 2478398 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:25:09.384091 2478398 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:25:09.384139 2478398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-951206
	I1101 09:25:09.401363 2478398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36310 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/pause-951206/id_rsa Username:docker}
	I1101 09:25:09.503468 2478398 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:25:09.506820 2478398 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:25:09.506853 2478398 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:25:09.506864 2478398 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/addons for local assets ...
	I1101 09:25:09.506918 2478398 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/files for local assets ...
	I1101 09:25:09.507007 2478398 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem -> 23159822.pem in /etc/ssl/certs
	I1101 09:25:09.507114 2478398 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:25:09.514440 2478398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem --> /etc/ssl/certs/23159822.pem (1708 bytes)
	I1101 09:25:09.531947 2478398 start.go:296] duration metric: took 147.915591ms for postStartSetup
	I1101 09:25:09.532038 2478398 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:25:09.532085 2478398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-951206
	I1101 09:25:09.549027 2478398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36310 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/pause-951206/id_rsa Username:docker}
	I1101 09:25:09.648817 2478398 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:25:09.653784 2478398 fix.go:56] duration metric: took 7.069085326s for fixHost
	I1101 09:25:09.653857 2478398 start.go:83] releasing machines lock for "pause-951206", held for 7.069184499s
	I1101 09:25:09.653966 2478398 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-951206
	I1101 09:25:09.670153 2478398 ssh_runner.go:195] Run: cat /version.json
	I1101 09:25:09.670205 2478398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-951206
	I1101 09:25:09.670601 2478398 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:25:09.670659 2478398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-951206
	I1101 09:25:09.694592 2478398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36310 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/pause-951206/id_rsa Username:docker}
	I1101 09:25:09.701792 2478398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36310 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/pause-951206/id_rsa Username:docker}
	I1101 09:25:09.878932 2478398 ssh_runner.go:195] Run: systemctl --version
	I1101 09:25:09.885389 2478398 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:25:09.922864 2478398 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:25:09.927250 2478398 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:25:09.927330 2478398 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:25:09.935222 2478398 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 09:25:09.935245 2478398 start.go:496] detecting cgroup driver to use...
	I1101 09:25:09.935276 2478398 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 09:25:09.935340 2478398 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:25:09.950765 2478398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:25:09.963664 2478398 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:25:09.963740 2478398 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:25:09.978938 2478398 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:25:09.992541 2478398 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:25:10.131906 2478398 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:25:10.270687 2478398 docker.go:234] disabling docker service ...
	I1101 09:25:10.270752 2478398 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:25:10.285724 2478398 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:25:10.298589 2478398 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:25:10.426585 2478398 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:25:10.564507 2478398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:25:10.577299 2478398 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:25:10.591665 2478398 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:25:10.591729 2478398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:25:10.600673 2478398 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:25:10.600753 2478398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:25:10.610184 2478398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:25:10.619079 2478398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:25:10.628104 2478398 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:25:10.636889 2478398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:25:10.646330 2478398 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:25:10.655237 2478398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:25:10.664017 2478398 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:25:10.671438 2478398 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:25:10.678831 2478398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:25:10.808449 2478398 ssh_runner.go:195] Run: sudo systemctl restart crio
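The chain of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before the restart; a hedged sketch of the settings it should end up containing (values copied from the commands in the log):

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	# default_sysctls = [
	#   "net.ipv4.ip_unprivileged_port_start=0",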
	I1101 09:25:10.978460 2478398 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:25:10.978548 2478398 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:25:10.983267 2478398 start.go:564] Will wait 60s for crictl version
	I1101 09:25:10.983333 2478398 ssh_runner.go:195] Run: which crictl
	I1101 09:25:10.986965 2478398 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:25:11.015254 2478398 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:25:11.015348 2478398 ssh_runner.go:195] Run: crio --version
	I1101 09:25:11.042945 2478398 ssh_runner.go:195] Run: crio --version
	I1101 09:25:11.075812 2478398 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:25:06.686717 2478201 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-778652:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.170700539s)
	I1101 09:25:06.686753 2478201 kic.go:203] duration metric: took 4.17083478s to extract preloaded images to volume ...
	W1101 09:25:06.686906 2478201 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 09:25:06.687023 2478201 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 09:25:06.759397 2478201 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-778652 --name force-systemd-env-778652 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-778652 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-778652 --network force-systemd-env-778652 --ip 192.168.76.2 --volume force-systemd-env-778652:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 09:25:07.065610 2478201 cli_runner.go:164] Run: docker container inspect force-systemd-env-778652 --format={{.State.Running}}
	I1101 09:25:07.086868 2478201 cli_runner.go:164] Run: docker container inspect force-systemd-env-778652 --format={{.State.Status}}
	I1101 09:25:07.109141 2478201 cli_runner.go:164] Run: docker exec force-systemd-env-778652 stat /var/lib/dpkg/alternatives/iptables
	I1101 09:25:07.161970 2478201 oci.go:144] the created container "force-systemd-env-778652" has a running status.
	I1101 09:25:07.161998 2478201 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/force-systemd-env-778652/id_rsa...
	I1101 09:25:08.006371 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/force-systemd-env-778652/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1101 09:25:08.006434 2478201 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/force-systemd-env-778652/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 09:25:08.027751 2478201 cli_runner.go:164] Run: docker container inspect force-systemd-env-778652 --format={{.State.Status}}
	I1101 09:25:08.047931 2478201 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 09:25:08.047958 2478201 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-778652 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 09:25:08.088582 2478201 cli_runner.go:164] Run: docker container inspect force-systemd-env-778652 --format={{.State.Status}}
	I1101 09:25:08.107209 2478201 machine.go:94] provisionDockerMachine start ...
	I1101 09:25:08.107316 2478201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-778652
	I1101 09:25:08.124470 2478201 main.go:143] libmachine: Using SSH client type: native
	I1101 09:25:08.124803 2478201 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36320 <nil> <nil>}
	I1101 09:25:08.124817 2478201 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:25:08.125463 2478201 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 09:25:11.287948 2478201 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-env-778652
	
	I1101 09:25:11.287977 2478201 ubuntu.go:182] provisioning hostname "force-systemd-env-778652"
	I1101 09:25:11.288073 2478201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-778652
	I1101 09:25:11.313386 2478201 main.go:143] libmachine: Using SSH client type: native
	I1101 09:25:11.313682 2478201 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36320 <nil> <nil>}
	I1101 09:25:11.313698 2478201 main.go:143] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-778652 && echo "force-systemd-env-778652" | sudo tee /etc/hostname
	I1101 09:25:11.078800 2478398 cli_runner.go:164] Run: docker network inspect pause-951206 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:25:11.096172 2478398 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 09:25:11.100580 2478398 kubeadm.go:884] updating cluster {Name:pause-951206 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-951206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:25:11.100741 2478398 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:25:11.100808 2478398 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:25:11.142943 2478398 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:25:11.142977 2478398 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:25:11.143048 2478398 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:25:11.177461 2478398 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:25:11.177487 2478398 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:25:11.177496 2478398 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1101 09:25:11.177655 2478398 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-951206 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-951206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
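The fragment above is what ends up in the kubelet systemd drop-in; to see the fully assembled unit on the node one could run (illustrative):

	systemctl cat kubelet                                        # base unit plus drop-ins
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf   # the small file scp'd a few lines below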
	I1101 09:25:11.177787 2478398 ssh_runner.go:195] Run: crio config
	I1101 09:25:11.259109 2478398 cni.go:84] Creating CNI manager for ""
	I1101 09:25:11.259142 2478398 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:25:11.259161 2478398 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:25:11.259202 2478398 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-951206 NodeName:pause-951206 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:25:11.259488 2478398 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-951206"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
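One non-destructive way to sanity-check the kubeadm config generated above, using the same binary and path that appear later in this log (a sketch; the dry-run itself is not something minikube performs here):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run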
	
	I1101 09:25:11.259591 2478398 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:25:11.267301 2478398 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:25:11.267401 2478398 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:25:11.274776 2478398 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1101 09:25:11.288571 2478398 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:25:11.302134 2478398 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1101 09:25:11.326583 2478398 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:25:11.330260 2478398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:25:11.497733 2478398 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:25:11.518479 2478398 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/pause-951206 for IP: 192.168.85.2
	I1101 09:25:11.518501 2478398 certs.go:195] generating shared ca certs ...
	I1101 09:25:11.518517 2478398 certs.go:227] acquiring lock for ca certs: {Name:mk24842b93d4e231663829c7c8677798ff77a3a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:25:11.518669 2478398 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key
	I1101 09:25:11.518723 2478398 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key
	I1101 09:25:11.518731 2478398 certs.go:257] generating profile certs ...
	I1101 09:25:11.518809 2478398 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/pause-951206/client.key
	I1101 09:25:11.518879 2478398 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/pause-951206/apiserver.key.55d03f72
	I1101 09:25:11.518918 2478398 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/pause-951206/proxy-client.key
	I1101 09:25:11.519025 2478398 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem (1338 bytes)
	W1101 09:25:11.519051 2478398 certs.go:480] ignoring /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982_empty.pem, impossibly tiny 0 bytes
	I1101 09:25:11.519058 2478398 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 09:25:11.519087 2478398 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:25:11.519111 2478398 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:25:11.519131 2478398 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem (1675 bytes)
	I1101 09:25:11.519172 2478398 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem (1708 bytes)
	I1101 09:25:11.519748 2478398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:25:11.558043 2478398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 09:25:11.578895 2478398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:25:11.603249 2478398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:25:11.628368 2478398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/pause-951206/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 09:25:11.647093 2478398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/pause-951206/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:25:11.665961 2478398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/pause-951206/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:25:11.682381 2478398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/pause-951206/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:25:11.702756 2478398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem --> /usr/share/ca-certificates/23159822.pem (1708 bytes)
	I1101 09:25:11.723072 2478398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:25:11.740988 2478398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem --> /usr/share/ca-certificates/2315982.pem (1338 bytes)
	I1101 09:25:11.760282 2478398 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:25:11.773510 2478398 ssh_runner.go:195] Run: openssl version
	I1101 09:25:11.780020 2478398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23159822.pem && ln -fs /usr/share/ca-certificates/23159822.pem /etc/ssl/certs/23159822.pem"
	I1101 09:25:11.788191 2478398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23159822.pem
	I1101 09:25:11.792216 2478398 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 08:36 /usr/share/ca-certificates/23159822.pem
	I1101 09:25:11.792276 2478398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23159822.pem
	I1101 09:25:11.834292 2478398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23159822.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:25:11.842125 2478398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:25:11.850176 2478398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:25:11.854670 2478398 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:25:11.854774 2478398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:25:11.897052 2478398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:25:11.905291 2478398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2315982.pem && ln -fs /usr/share/ca-certificates/2315982.pem /etc/ssl/certs/2315982.pem"
	I1101 09:25:11.913755 2478398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2315982.pem
	I1101 09:25:11.918235 2478398 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 08:36 /usr/share/ca-certificates/2315982.pem
	I1101 09:25:11.918312 2478398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2315982.pem
	I1101 09:25:11.961677 2478398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2315982.pem /etc/ssl/certs/51391683.0"
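The hash-named symlinks created above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash convention, which is why each certificate is hashed first; for example:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem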
	I1101 09:25:11.969727 2478398 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:25:11.974292 2478398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:25:12.016776 2478398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:25:12.059319 2478398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:25:12.102433 2478398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:25:12.147326 2478398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:25:12.206188 2478398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
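Each -checkend 86400 call above exits zero only if the certificate remains valid for at least another 24 hours; the same check can be scripted as (illustrative):

	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for >= 24h" || echo "expiring within 24h"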
	I1101 09:25:12.251551 2478398 kubeadm.go:401] StartCluster: {Name:pause-951206 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-951206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:25:12.251682 2478398 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:25:12.251740 2478398 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:25:12.331985 2478398 cri.go:89] found id: "8c588c62b138bc6cc1aaeae9bc15a83731cfe0ee7bdd104f8d28c7b0b80aee31"
	I1101 09:25:12.332009 2478398 cri.go:89] found id: "b504ec0758f08784fd92576699693b07ab12f61917f04b0c3f9548f87aa4e834"
	I1101 09:25:12.332014 2478398 cri.go:89] found id: "ae4144162ff99a96713f4b79715f1b459b8757fe18777f4b87377958ea076cd5"
	I1101 09:25:12.332018 2478398 cri.go:89] found id: "0cdb4df035719000dd91d22036208e0ef5b5c165830c9fe2e474beebd7fa8f3d"
	I1101 09:25:12.332022 2478398 cri.go:89] found id: "c2887326593949338658d54bb176a3f92ca0ce4d7619db8e75d1d9e2fd3c297c"
	I1101 09:25:12.332025 2478398 cri.go:89] found id: "50e4c19be115cbd5a27bca874f1801f8fb8f2ae6a82fb47904af1031ae88e97a"
	I1101 09:25:12.332028 2478398 cri.go:89] found id: "c6f012ce8b285f2c250e9e5f1e148dce648fbf07bd9c33e499baf85c396f37d8"
	I1101 09:25:12.332031 2478398 cri.go:89] found id: ""
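The container IDs found above can be tied back to their kube-system pods with the non-quiet form of the same query (illustrative):

	sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system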
	I1101 09:25:12.332077 2478398 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 09:25:12.359046 2478398 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:25:12Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:25:12.359129 2478398 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:25:12.374100 2478398 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 09:25:12.374117 2478398 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 09:25:12.374170 2478398 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 09:25:12.393377 2478398 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:25:12.393915 2478398 kubeconfig.go:125] found "pause-951206" server: "https://192.168.85.2:8443"
	I1101 09:25:12.394473 2478398 kapi.go:59] client config for pause-951206: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/pause-951206/client.crt", KeyFile:"/home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/pause-951206/client.key", CAFile:"/home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 09:25:12.394957 2478398 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1101 09:25:12.394970 2478398 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1101 09:25:12.394975 2478398 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1101 09:25:12.394986 2478398 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1101 09:25:12.394991 2478398 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1101 09:25:12.395239 2478398 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 09:25:12.418731 2478398 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1101 09:25:12.418762 2478398 kubeadm.go:602] duration metric: took 44.638728ms to restartPrimaryControlPlane
	I1101 09:25:12.418770 2478398 kubeadm.go:403] duration metric: took 167.229312ms to StartCluster
	I1101 09:25:12.418785 2478398 settings.go:142] acquiring lock: {Name:mka73a3765cb6575d4abe38a6ae3325222684786 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:25:12.418846 2478398 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:25:12.419527 2478398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/kubeconfig: {Name:mk53329368b7306829f4e47471838b13e1e36d2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:25:12.419728 2478398 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:25:12.420193 2478398 config.go:182] Loaded profile config "pause-951206": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:25:12.420268 2478398 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:25:12.425180 2478398 out.go:179] * Verifying Kubernetes components...
	I1101 09:25:12.425289 2478398 out.go:179] * Enabled addons: 
	I1101 09:25:11.501917 2478201 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-env-778652
	
	I1101 09:25:11.502040 2478201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-778652
	I1101 09:25:11.528063 2478201 main.go:143] libmachine: Using SSH client type: native
	I1101 09:25:11.528702 2478201 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36320 <nil> <nil>}
	I1101 09:25:11.528728 2478201 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-778652' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-778652/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-778652' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:25:11.687720 2478201 main.go:143] libmachine: SSH cmd err, output: <nil>: 
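	(The hostname fixup above, like most provisioning steps in this log, is a shell snippet executed on the node container over SSH, using the published 22/tcp port — 36320 here. A minimal sketch of that pattern using golang.org/x/crypto/ssh; this is illustrative, not minikube's actual ssh_runner, and the key path is a placeholder.)

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote executes a shell command on the node over SSH and returns its
// combined stdout/stderr, roughly mirroring the ssh_runner lines in this log.
func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // target is a short-lived local test container
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// 127.0.0.1:36320 is the published 22/tcp port from the log; the key path is illustrative.
	out, err := runRemote("127.0.0.1:36320", "docker", "/path/to/id_rsa", "hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```

	(Host-key verification is skipped only because the target is a container the test created moments earlier.)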
	I1101 09:25:11.687816 2478201 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-2314135/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-2314135/.minikube}
	I1101 09:25:11.687930 2478201 ubuntu.go:190] setting up certificates
	I1101 09:25:11.687966 2478201 provision.go:84] configureAuth start
	I1101 09:25:11.688041 2478201 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-778652
	I1101 09:25:11.711320 2478201 provision.go:143] copyHostCerts
	I1101 09:25:11.711363 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem
	I1101 09:25:11.711392 2478201 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem, removing ...
	I1101 09:25:11.711399 2478201 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem
	I1101 09:25:11.711469 2478201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem (1082 bytes)
	I1101 09:25:11.711551 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem
	I1101 09:25:11.711568 2478201 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem, removing ...
	I1101 09:25:11.711572 2478201 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem
	I1101 09:25:11.711596 2478201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem (1123 bytes)
	I1101 09:25:11.711644 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem
	I1101 09:25:11.711660 2478201 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem, removing ...
	I1101 09:25:11.711664 2478201 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem
	I1101 09:25:11.711687 2478201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem (1675 bytes)
	I1101 09:25:11.711761 2478201 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-778652 san=[127.0.0.1 192.168.76.2 force-systemd-env-778652 localhost minikube]
	I1101 09:25:12.136569 2478201 provision.go:177] copyRemoteCerts
	I1101 09:25:12.136693 2478201 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:25:12.136760 2478201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-778652
	I1101 09:25:12.157563 2478201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36320 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/force-systemd-env-778652/id_rsa Username:docker}
	I1101 09:25:12.267890 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1101 09:25:12.267960 2478201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:25:12.298413 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1101 09:25:12.298480 2478201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1101 09:25:12.322050 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1101 09:25:12.322118 2478201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 09:25:12.347137 2478201 provision.go:87] duration metric: took 659.134296ms to configureAuth
	I1101 09:25:12.347206 2478201 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:25:12.347427 2478201 config.go:182] Loaded profile config "force-systemd-env-778652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:25:12.347573 2478201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-778652
	I1101 09:25:12.375664 2478201 main.go:143] libmachine: Using SSH client type: native
	I1101 09:25:12.376004 2478201 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36320 <nil> <nil>}
	I1101 09:25:12.376020 2478201 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:25:12.757729 2478201 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:25:12.757753 2478201 machine.go:97] duration metric: took 4.650518581s to provisionDockerMachine
	I1101 09:25:12.757763 2478201 client.go:176] duration metric: took 11.044803182s to LocalClient.Create
	I1101 09:25:12.757775 2478201 start.go:167] duration metric: took 11.044874844s to libmachine.API.Create "force-systemd-env-778652"
	I1101 09:25:12.757828 2478201 start.go:293] postStartSetup for "force-systemd-env-778652" (driver="docker")
	I1101 09:25:12.757838 2478201 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:25:12.757920 2478201 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:25:12.758002 2478201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-778652
	I1101 09:25:12.780702 2478201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36320 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/force-systemd-env-778652/id_rsa Username:docker}
	I1101 09:25:13.007632 2478201 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:25:13.014659 2478201 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:25:13.014691 2478201 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:25:13.014702 2478201 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/addons for local assets ...
	I1101 09:25:13.014758 2478201 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/files for local assets ...
	I1101 09:25:13.014849 2478201 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem -> 23159822.pem in /etc/ssl/certs
	I1101 09:25:13.014861 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem -> /etc/ssl/certs/23159822.pem
	I1101 09:25:13.014966 2478201 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:25:13.030381 2478201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem --> /etc/ssl/certs/23159822.pem (1708 bytes)
	I1101 09:25:13.075284 2478201 start.go:296] duration metric: took 317.440741ms for postStartSetup
	I1101 09:25:13.075669 2478201 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-778652
	I1101 09:25:13.115168 2478201 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/config.json ...
	I1101 09:25:13.115443 2478201 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:25:13.115499 2478201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-778652
	I1101 09:25:13.144637 2478201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36320 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/force-systemd-env-778652/id_rsa Username:docker}
	I1101 09:25:13.261490 2478201 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:25:13.272537 2478201 start.go:128] duration metric: took 11.563329008s to createHost
	I1101 09:25:13.272563 2478201 start.go:83] releasing machines lock for "force-systemd-env-778652", held for 11.563472577s
	I1101 09:25:13.272637 2478201 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-778652
	I1101 09:25:13.299999 2478201 ssh_runner.go:195] Run: cat /version.json
	I1101 09:25:13.300055 2478201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-778652
	I1101 09:25:13.300287 2478201 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:25:13.300346 2478201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-778652
	I1101 09:25:13.334765 2478201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36320 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/force-systemd-env-778652/id_rsa Username:docker}
	I1101 09:25:13.341014 2478201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36320 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/force-systemd-env-778652/id_rsa Username:docker}
	I1101 09:25:13.460062 2478201 ssh_runner.go:195] Run: systemctl --version
	I1101 09:25:13.571125 2478201 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:25:13.660212 2478201 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:25:13.668910 2478201 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:25:13.669012 2478201 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:25:13.721483 2478201 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 09:25:13.721508 2478201 start.go:496] detecting cgroup driver to use...
	I1101 09:25:13.721552 2478201 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1101 09:25:13.721631 2478201 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:25:13.753843 2478201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:25:13.775938 2478201 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:25:13.776033 2478201 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:25:13.806157 2478201 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:25:13.834806 2478201 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:25:14.045967 2478201 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:25:14.248604 2478201 docker.go:234] disabling docker service ...
	I1101 09:25:14.248704 2478201 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:25:14.285013 2478201 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:25:14.304592 2478201 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:25:14.491882 2478201 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:25:14.694798 2478201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:25:14.708835 2478201 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:25:14.725060 2478201 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:25:14.725150 2478201 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:25:14.742328 2478201 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 09:25:14.742419 2478201 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:25:14.754768 2478201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:25:14.777234 2478201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:25:14.794717 2478201 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:25:14.817248 2478201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:25:14.842539 2478201 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:25:14.858318 2478201 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:25:14.873794 2478201 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:25:14.885804 2478201 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:25:14.897231 2478201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:25:15.098357 2478201 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:25:15.301190 2478201 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:25:15.301299 2478201 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:25:15.308627 2478201 start.go:564] Will wait 60s for crictl version
	I1101 09:25:15.308721 2478201 ssh_runner.go:195] Run: which crictl
	I1101 09:25:15.316225 2478201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:25:15.368105 2478201 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
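	(The runtime probe above is simply `sudo /usr/local/bin/crictl version` run on the node. A minimal sketch of issuing the same probe with os/exec, assuming it runs on the node itself rather than through the SSH path shown earlier.)

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Mirrors the "sudo /usr/local/bin/crictl version" probe from the log.
	out, err := exec.Command("sudo", "/usr/local/bin/crictl", "version").CombinedOutput()
	if err != nil {
		log.Fatalf("crictl version failed: %v\n%s", err, out)
	}
	fmt.Print(string(out))
}
```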
	I1101 09:25:15.368216 2478201 ssh_runner.go:195] Run: crio --version
	I1101 09:25:15.419150 2478201 ssh_runner.go:195] Run: crio --version
	I1101 09:25:15.459634 2478201 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:25:15.462594 2478201 cli_runner.go:164] Run: docker network inspect force-systemd-env-778652 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:25:15.488089 2478201 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 09:25:15.492230 2478201 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:25:15.501791 2478201 kubeadm.go:884] updating cluster {Name:force-systemd-env-778652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-778652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
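	(The cluster definition echoed here is persisted as the profile's config.json — the "Saving config to .../config.json" line earlier writes it. A minimal, hypothetical sketch of reading a few of those fields back; the struct models only a subset, and the field names are inferred from the dump above rather than taken from minikube's source.)

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// profileConfig models only a handful of the fields visible in the dump above;
// the real minikube config type carries many more.
type profileConfig struct {
	Name             string `json:"Name"`
	Driver           string `json:"Driver"`
	KubernetesConfig struct {
		KubernetesVersion string `json:"KubernetesVersion"`
		ContainerRuntime  string `json:"ContainerRuntime"`
		ServiceCIDR       string `json:"ServiceCIDR"`
	} `json:"KubernetesConfig"`
}

func main() {
	// Path follows the profile layout shown in the log; adjust for your environment.
	path := os.ExpandEnv("$HOME/.minikube/profiles/force-systemd-env-778652/config.json")
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	var cfg profileConfig
	if err := json.Unmarshal(data, &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s: driver=%s runtime=%s k8s=%s\n",
		cfg.Name, cfg.Driver, cfg.KubernetesConfig.ContainerRuntime, cfg.KubernetesConfig.KubernetesVersion)
}
```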
	I1101 09:25:15.501898 2478201 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:25:15.501964 2478201 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:25:15.558755 2478201 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:25:15.558774 2478201 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:25:15.558828 2478201 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:25:15.605557 2478201 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:25:15.605628 2478201 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:25:15.605652 2478201 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 09:25:15.605783 2478201 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-778652 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-778652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:25:15.605897 2478201 ssh_runner.go:195] Run: crio config
	I1101 09:25:15.690986 2478201 cni.go:84] Creating CNI manager for ""
	I1101 09:25:15.691111 2478201 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:25:15.691147 2478201 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:25:15.691196 2478201 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-778652 NodeName:force-systemd-env-778652 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:25:15.691373 2478201 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-778652"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
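	(This generated multi-document config is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below and later handed to kubeadm init. A minimal sketch, assuming gopkg.in/yaml.v3, that splits the file into its documents and prints each apiVersion/kind as a quick sanity check; this is not part of minikube's own flow.)

```go
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		// Decode only the fields common to every kubeadm document.
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}
```

	(For the config above this should list InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration.)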
	
	I1101 09:25:15.691487 2478201 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:25:15.701753 2478201 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:25:15.701884 2478201 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:25:15.711986 2478201 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1101 09:25:15.729234 2478201 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:25:15.747384 2478201 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1101 09:25:15.763438 2478201 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:25:15.767697 2478201 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:25:15.777212 2478201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:25:15.958087 2478201 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:25:16.003118 2478201 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652 for IP: 192.168.76.2
	I1101 09:25:16.003199 2478201 certs.go:195] generating shared ca certs ...
	I1101 09:25:16.003233 2478201 certs.go:227] acquiring lock for ca certs: {Name:mk24842b93d4e231663829c7c8677798ff77a3a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:25:16.003475 2478201 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key
	I1101 09:25:16.003566 2478201 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key
	I1101 09:25:16.003606 2478201 certs.go:257] generating profile certs ...
	I1101 09:25:16.003706 2478201 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/client.key
	I1101 09:25:16.003766 2478201 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/client.crt with IP's: []
	I1101 09:25:12.427944 2478398 addons.go:515] duration metric: took 7.570013ms for enable addons: enabled=[]
	I1101 09:25:12.428095 2478398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:25:12.802287 2478398 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:25:12.820822 2478398 node_ready.go:35] waiting up to 6m0s for node "pause-951206" to be "Ready" ...
	I1101 09:25:16.537475 2478201 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/client.crt ...
	I1101 09:25:16.537559 2478201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/client.crt: {Name:mk3e0d0b4efbcd31e60ac39b65d28557f5cdc618 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:25:16.537763 2478201 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/client.key ...
	I1101 09:25:16.537810 2478201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/client.key: {Name:mka63bbf35663fd50984ee97e36cece72ef22ea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:25:16.537931 2478201 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/apiserver.key.71763820
	I1101 09:25:16.537979 2478201 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/apiserver.crt.71763820 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1101 09:25:16.946351 2478201 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/apiserver.crt.71763820 ...
	I1101 09:25:16.946423 2478201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/apiserver.crt.71763820: {Name:mkcb035b041dd30d5ee448dc9db0a5d2327844bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:25:16.946646 2478201 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/apiserver.key.71763820 ...
	I1101 09:25:16.946684 2478201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/apiserver.key.71763820: {Name:mk84d70dc281bc632f90e66ce20e1c4a47e66211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:25:16.946812 2478201 certs.go:382] copying /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/apiserver.crt.71763820 -> /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/apiserver.crt
	I1101 09:25:16.946924 2478201 certs.go:386] copying /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/apiserver.key.71763820 -> /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/apiserver.key
	I1101 09:25:16.947023 2478201 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/proxy-client.key
	I1101 09:25:16.947067 2478201 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/proxy-client.crt with IP's: []
	I1101 09:25:17.607390 2478201 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/proxy-client.crt ...
	I1101 09:25:17.607468 2478201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/proxy-client.crt: {Name:mk94032db39ffa2a9aedeaf857e9bb297469459a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:25:17.607697 2478201 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/proxy-client.key ...
	I1101 09:25:17.607735 2478201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/proxy-client.key: {Name:mke888c55a7e70dad1d23b1660cd5ea205743208 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
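	(The client, apiserver, and proxy-client profile certs written here are ordinary x509 certificates signed by the shared minikube CA, with the apiserver cert carrying the IP SANs listed above — 10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2. A minimal sketch with crypto/x509 of issuing such a cert; the in-memory CA, key sizes, and lifetimes are illustrative and this is not minikube's actual crypto.go.)

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative CA generated in memory; minikube reuses ca.crt/ca.key from disk.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Server cert with the SAN list seen in the log for the apiserver cert.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```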
	I1101 09:25:17.607874 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1101 09:25:17.607920 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1101 09:25:17.607952 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1101 09:25:17.607986 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1101 09:25:17.608024 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1101 09:25:17.608057 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1101 09:25:17.608093 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1101 09:25:17.608129 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1101 09:25:17.608207 2478201 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem (1338 bytes)
	W1101 09:25:17.608267 2478201 certs.go:480] ignoring /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982_empty.pem, impossibly tiny 0 bytes
	I1101 09:25:17.608291 2478201 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 09:25:17.608335 2478201 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:25:17.608388 2478201 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:25:17.608429 2478201 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem (1675 bytes)
	I1101 09:25:17.608507 2478201 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem (1708 bytes)
	I1101 09:25:17.608574 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem -> /usr/share/ca-certificates/23159822.pem
	I1101 09:25:17.608613 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:25:17.608643 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem -> /usr/share/ca-certificates/2315982.pem
	I1101 09:25:17.609242 2478201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:25:17.656717 2478201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 09:25:17.685370 2478201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:25:17.706194 2478201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:25:17.731455 2478201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1101 09:25:17.757939 2478201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:25:17.790063 2478201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:25:17.813821 2478201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:25:17.847737 2478201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem --> /usr/share/ca-certificates/23159822.pem (1708 bytes)
	I1101 09:25:17.870652 2478201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:25:17.891454 2478201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem --> /usr/share/ca-certificates/2315982.pem (1338 bytes)
	I1101 09:25:17.919104 2478201 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:25:17.938827 2478201 ssh_runner.go:195] Run: openssl version
	I1101 09:25:17.950355 2478201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2315982.pem && ln -fs /usr/share/ca-certificates/2315982.pem /etc/ssl/certs/2315982.pem"
	I1101 09:25:17.959502 2478201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2315982.pem
	I1101 09:25:17.967786 2478201 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 08:36 /usr/share/ca-certificates/2315982.pem
	I1101 09:25:17.968047 2478201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2315982.pem
	I1101 09:25:18.027946 2478201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2315982.pem /etc/ssl/certs/51391683.0"
	I1101 09:25:18.041070 2478201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23159822.pem && ln -fs /usr/share/ca-certificates/23159822.pem /etc/ssl/certs/23159822.pem"
	I1101 09:25:18.062259 2478201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23159822.pem
	I1101 09:25:18.073714 2478201 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 08:36 /usr/share/ca-certificates/23159822.pem
	I1101 09:25:18.073842 2478201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23159822.pem
	I1101 09:25:18.156862 2478201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23159822.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:25:18.169186 2478201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:25:18.188378 2478201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:25:18.192396 2478201 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:25:18.192537 2478201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:25:18.245528 2478201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:25:18.261162 2478201 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:25:18.270293 2478201 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 09:25:18.270393 2478201 kubeadm.go:401] StartCluster: {Name:force-systemd-env-778652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-778652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:25:18.270492 2478201 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:25:18.270603 2478201 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:25:18.318883 2478201 cri.go:89] found id: ""
	I1101 09:25:18.319024 2478201 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:25:18.333166 2478201 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:25:18.346662 2478201 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 09:25:18.346776 2478201 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:25:18.360007 2478201 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 09:25:18.360074 2478201 kubeadm.go:158] found existing configuration files:
	
	I1101 09:25:18.360154 2478201 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 09:25:18.369840 2478201 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 09:25:18.369901 2478201 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 09:25:18.388463 2478201 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 09:25:18.402242 2478201 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 09:25:18.402303 2478201 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:25:18.412915 2478201 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 09:25:18.424931 2478201 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 09:25:18.424995 2478201 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:25:18.435603 2478201 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 09:25:18.446183 2478201 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 09:25:18.446283 2478201 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 09:25:18.455414 2478201 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 09:25:18.540241 2478201 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 09:25:18.540687 2478201 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 09:25:18.581492 2478201 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 09:25:18.581620 2478201 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 09:25:18.581690 2478201 kubeadm.go:319] OS: Linux
	I1101 09:25:18.581764 2478201 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 09:25:18.581847 2478201 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 09:25:18.581922 2478201 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 09:25:18.582005 2478201 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 09:25:18.582081 2478201 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 09:25:18.582160 2478201 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 09:25:18.582231 2478201 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 09:25:18.582313 2478201 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 09:25:18.582385 2478201 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 09:25:18.700646 2478201 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 09:25:18.700825 2478201 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 09:25:18.700961 2478201 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 09:25:18.712625 2478201 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 09:25:18.717205 2478201 out.go:252]   - Generating certificates and keys ...
	I1101 09:25:18.717378 2478201 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 09:25:18.717483 2478201 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 09:25:19.234668 2478201 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 09:25:19.449985 2478201 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 09:25:19.908487 2478201 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 09:25:20.311042 2478201 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 09:25:21.045298 2478201 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 09:25:21.045587 2478201 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-778652 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 09:25:18.333639 2478398 node_ready.go:49] node "pause-951206" is "Ready"
	I1101 09:25:18.333661 2478398 node_ready.go:38] duration metric: took 5.512804866s for node "pause-951206" to be "Ready" ...
	I1101 09:25:18.333675 2478398 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:25:18.333714 2478398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:25:18.372285 2478398 api_server.go:72] duration metric: took 5.952497015s to wait for apiserver process to appear ...
	I1101 09:25:18.372306 2478398 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:25:18.372324 2478398 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:25:18.445620 2478398 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 09:25:18.445700 2478398 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 09:25:18.872937 2478398 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:25:18.908919 2478398 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:25:18.908950 2478398 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 09:25:19.372389 2478398 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:25:19.444158 2478398 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:25:19.444194 2478398 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 09:25:19.872607 2478398 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:25:19.884767 2478398 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1101 09:25:19.886092 2478398 api_server.go:141] control plane version: v1.34.1
	I1101 09:25:19.886158 2478398 api_server.go:131] duration metric: took 1.51384467s to wait for apiserver health ...
	I1101 09:25:19.886181 2478398 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:25:19.893771 2478398 system_pods.go:59] 7 kube-system pods found
	I1101 09:25:19.893856 2478398 system_pods.go:61] "coredns-66bc5c9577-5vztm" [39f1f2f9-c206-4cc0-a799-cb547db90061] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:25:19.893880 2478398 system_pods.go:61] "etcd-pause-951206" [af1f8a25-028b-424a-a524-af4906d319bc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:25:19.893900 2478398 system_pods.go:61] "kindnet-q9r8f" [8791b18b-4128-40c1-961b-b9eb8bb798e0] Running
	I1101 09:25:19.893936 2478398 system_pods.go:61] "kube-apiserver-pause-951206" [9a509e39-ef64-4407-843e-ad2b7b26a20e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:25:19.893957 2478398 system_pods.go:61] "kube-controller-manager-pause-951206" [533a5767-6f17-486a-b374-bb30467f69f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:25:19.893977 2478398 system_pods.go:61] "kube-proxy-6ttp4" [90e67eaf-ffaf-43f7-bf24-0fa6509c4ed3] Running
	I1101 09:25:19.894008 2478398 system_pods.go:61] "kube-scheduler-pause-951206" [533a1e65-747b-46ca-9b99-a0935428abf0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:25:19.894035 2478398 system_pods.go:74] duration metric: took 7.834893ms to wait for pod list to return data ...
	I1101 09:25:19.894059 2478398 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:25:19.897237 2478398 default_sa.go:45] found service account: "default"
	I1101 09:25:19.897311 2478398 default_sa.go:55] duration metric: took 3.219841ms for default service account to be created ...
	I1101 09:25:19.897336 2478398 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:25:19.901185 2478398 system_pods.go:86] 7 kube-system pods found
	I1101 09:25:19.901266 2478398 system_pods.go:89] "coredns-66bc5c9577-5vztm" [39f1f2f9-c206-4cc0-a799-cb547db90061] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:25:19.901300 2478398 system_pods.go:89] "etcd-pause-951206" [af1f8a25-028b-424a-a524-af4906d319bc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:25:19.901321 2478398 system_pods.go:89] "kindnet-q9r8f" [8791b18b-4128-40c1-961b-b9eb8bb798e0] Running
	I1101 09:25:19.901372 2478398 system_pods.go:89] "kube-apiserver-pause-951206" [9a509e39-ef64-4407-843e-ad2b7b26a20e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:25:19.901399 2478398 system_pods.go:89] "kube-controller-manager-pause-951206" [533a5767-6f17-486a-b374-bb30467f69f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:25:19.901417 2478398 system_pods.go:89] "kube-proxy-6ttp4" [90e67eaf-ffaf-43f7-bf24-0fa6509c4ed3] Running
	I1101 09:25:19.901453 2478398 system_pods.go:89] "kube-scheduler-pause-951206" [533a1e65-747b-46ca-9b99-a0935428abf0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:25:19.901476 2478398 system_pods.go:126] duration metric: took 4.121592ms to wait for k8s-apps to be running ...
	I1101 09:25:19.901497 2478398 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:25:19.901589 2478398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:25:19.947756 2478398 system_svc.go:56] duration metric: took 46.23651ms WaitForService to wait for kubelet
	I1101 09:25:19.947835 2478398 kubeadm.go:587] duration metric: took 7.528082282s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:25:19.947904 2478398 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:25:19.954381 2478398 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 09:25:19.954463 2478398 node_conditions.go:123] node cpu capacity is 2
	I1101 09:25:19.954490 2478398 node_conditions.go:105] duration metric: took 6.56543ms to run NodePressure ...
	I1101 09:25:19.954516 2478398 start.go:242] waiting for startup goroutines ...
	I1101 09:25:19.954547 2478398 start.go:247] waiting for cluster config update ...
	I1101 09:25:19.954569 2478398 start.go:256] writing updated cluster config ...
	I1101 09:25:19.954954 2478398 ssh_runner.go:195] Run: rm -f paused
	I1101 09:25:19.960155 2478398 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:25:19.960827 2478398 kapi.go:59] client config for pause-951206: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/pause-951206/client.crt", KeyFile:"/home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/pause-951206/client.key", CAFile:"/home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 09:25:19.964727 2478398 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5vztm" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 09:25:21.977054 2478398 pod_ready.go:104] pod "coredns-66bc5c9577-5vztm" is not "Ready", error: <nil>
	I1101 09:25:21.461458 2478201 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 09:25:21.461916 2478201 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-778652 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 09:25:22.005636 2478201 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 09:25:22.745383 2478201 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 09:25:23.021622 2478201 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 09:25:23.021941 2478201 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 09:25:23.742848 2478201 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 09:25:24.022443 2478201 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 09:25:25.037147 2478201 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 09:25:25.361003 2478201 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 09:25:26.403235 2478201 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 09:25:26.404039 2478201 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 09:25:26.406613 2478201 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 09:25:26.410165 2478201 out.go:252]   - Booting up control plane ...
	I1101 09:25:26.410287 2478201 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:25:26.410380 2478201 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:25:26.410451 2478201 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	W1101 09:25:24.470985 2478398 pod_ready.go:104] pod "coredns-66bc5c9577-5vztm" is not "Ready", error: <nil>
	I1101 09:25:24.972306 2478398 pod_ready.go:94] pod "coredns-66bc5c9577-5vztm" is "Ready"
	I1101 09:25:24.972328 2478398 pod_ready.go:86] duration metric: took 5.00753326s for pod "coredns-66bc5c9577-5vztm" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:25:24.975547 2478398 pod_ready.go:83] waiting for pod "etcd-pause-951206" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 09:25:26.981914 2478398 pod_ready.go:104] pod "etcd-pause-951206" is not "Ready", error: <nil>
	I1101 09:25:26.431943 2478201 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:25:26.432571 2478201 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 09:25:26.440993 2478201 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 09:25:26.441494 2478201 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:25:26.441754 2478201 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:25:26.567699 2478201 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 09:25:26.567826 2478201 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 09:25:28.072438 2478201 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501758088s
	I1101 09:25:28.073196 2478201 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 09:25:28.073405 2478201 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1101 09:25:28.073507 2478201 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 09:25:28.074015 2478201 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1101 09:25:29.480831 2478398 pod_ready.go:104] pod "etcd-pause-951206" is not "Ready", error: <nil>
	I1101 09:25:30.481001 2478398 pod_ready.go:94] pod "etcd-pause-951206" is "Ready"
	I1101 09:25:30.481072 2478398 pod_ready.go:86] duration metric: took 5.505505127s for pod "etcd-pause-951206" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:25:30.483764 2478398 pod_ready.go:83] waiting for pod "kube-apiserver-pause-951206" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:25:30.488042 2478398 pod_ready.go:94] pod "kube-apiserver-pause-951206" is "Ready"
	I1101 09:25:30.488111 2478398 pod_ready.go:86] duration metric: took 4.290695ms for pod "kube-apiserver-pause-951206" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:25:30.490383 2478398 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-951206" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:25:31.496062 2478398 pod_ready.go:94] pod "kube-controller-manager-pause-951206" is "Ready"
	I1101 09:25:31.496089 2478398 pod_ready.go:86] duration metric: took 1.005641528s for pod "kube-controller-manager-pause-951206" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:25:31.498309 2478398 pod_ready.go:83] waiting for pod "kube-proxy-6ttp4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:25:31.506488 2478398 pod_ready.go:94] pod "kube-proxy-6ttp4" is "Ready"
	I1101 09:25:31.506516 2478398 pod_ready.go:86] duration metric: took 8.182403ms for pod "kube-proxy-6ttp4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:25:31.678426 2478398 pod_ready.go:83] waiting for pod "kube-scheduler-pause-951206" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:25:32.078673 2478398 pod_ready.go:94] pod "kube-scheduler-pause-951206" is "Ready"
	I1101 09:25:32.078698 2478398 pod_ready.go:86] duration metric: took 400.244497ms for pod "kube-scheduler-pause-951206" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:25:32.078710 2478398 pod_ready.go:40] duration metric: took 12.1184744s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:25:32.181714 2478398 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 09:25:32.184832 2478398 out.go:179] * Done! kubectl is now configured to use "pause-951206" cluster and "default" namespace by default
	I1101 09:25:34.065663 2478201 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.991521003s
	I1101 09:25:34.095524 2478201 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.021298178s
	I1101 09:25:35.075367 2478201 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.001847503s
	I1101 09:25:35.117105 2478201 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 09:25:35.146630 2478201 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 09:25:35.164811 2478201 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 09:25:35.165037 2478201 kubeadm.go:319] [mark-control-plane] Marking the node force-systemd-env-778652 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 09:25:35.184678 2478201 kubeadm.go:319] [bootstrap-token] Using token: 5txf5d.psrfomc8ixrxkuec
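
The log above polls https://192.168.85.2:8443/healthz until the rbac/bootstrap-roles post-start hook passes, then waits for each kube-system control-plane pod to report Ready. A minimal sketch of repeating both checks by hand against this profile (assumes the pause-951206 kubeconfig context that minikube writes, and that the caller is authorized to read the individual healthz checks whose reasons are withheld above):

	# Same verbose healthz endpoint minikube polls, plus the single failing check.
	kubectl --context pause-951206 get --raw '/healthz?verbose'
	kubectl --context pause-951206 get --raw '/healthz/poststarthook/rbac/bootstrap-roles'

	# Wait for the pods the log enumerates, using the same label selectors.
	kubectl --context pause-951206 -n kube-system wait pod -l k8s-app=kube-dns \
	  --for=condition=Ready --timeout=4m
	kubectl --context pause-951206 -n kube-system wait pod -l component=kube-apiserver \
	  --for=condition=Ready --timeout=4m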
	
	
	==> CRI-O <==
	Nov 01 09:25:12 pause-951206 crio[2059]: time="2025-11-01T09:25:12.636094223Z" level=info msg="Created container 15815aa5b1fd6da5af7f27e0905719d302ad2a07087e13caf2ccc3b4c889d5fd: kube-system/coredns-66bc5c9577-5vztm/coredns" id=709372da-2d9b-432f-945e-ff42d9440f49 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:25:12 pause-951206 crio[2059]: time="2025-11-01T09:25:12.637790348Z" level=info msg="Starting container: 15815aa5b1fd6da5af7f27e0905719d302ad2a07087e13caf2ccc3b4c889d5fd" id=eb64c016-c59b-4616-b972-bc94071833c1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:25:12 pause-951206 crio[2059]: time="2025-11-01T09:25:12.640266128Z" level=info msg="Started container" PID=2324 containerID=e0e2262ea0f4e166f11273995407222648770de6a2fb43aaafc290e160ee6f6d description=kube-system/kube-controller-manager-pause-951206/kube-controller-manager id=c9204883-9398-43a0-9e55-1c3ca64fd0a4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0ea6f902d3547475bdf99d567eb4f46e408cb0619bdd0817724bc295a3f9290d
	Nov 01 09:25:12 pause-951206 crio[2059]: time="2025-11-01T09:25:12.653207824Z" level=info msg="Started container" PID=2359 containerID=15815aa5b1fd6da5af7f27e0905719d302ad2a07087e13caf2ccc3b4c889d5fd description=kube-system/coredns-66bc5c9577-5vztm/coredns id=eb64c016-c59b-4616-b972-bc94071833c1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ae16d8e0af76ccb45d54cbd891aed89ebe76a7a7c923c53e3516939805ed1fe0
	Nov 01 09:25:12 pause-951206 crio[2059]: time="2025-11-01T09:25:12.72600847Z" level=info msg="Created container ea40b56f7de32b3254ff839c8bd72fe33fe815b4ec8f7ab3ba120646dffc2676: kube-system/kindnet-q9r8f/kindnet-cni" id=0ff178fe-0a72-41cd-9195-7d1e28ed5ca6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:25:12 pause-951206 crio[2059]: time="2025-11-01T09:25:12.728059113Z" level=info msg="Starting container: ea40b56f7de32b3254ff839c8bd72fe33fe815b4ec8f7ab3ba120646dffc2676" id=549b1249-36b8-40b9-8635-331576faa61e name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:25:12 pause-951206 crio[2059]: time="2025-11-01T09:25:12.729791561Z" level=info msg="Started container" PID=2370 containerID=ea40b56f7de32b3254ff839c8bd72fe33fe815b4ec8f7ab3ba120646dffc2676 description=kube-system/kindnet-q9r8f/kindnet-cni id=549b1249-36b8-40b9-8635-331576faa61e name=/runtime.v1.RuntimeService/StartContainer sandboxID=9f54cb18308f125fb7d1e079dda1b48e046811714b2b4fcf77a8f5d7c3586bc5
	Nov 01 09:25:13 pause-951206 crio[2059]: time="2025-11-01T09:25:13.583003884Z" level=info msg="Created container 72980c4c54ed2cf65c1907185972beba807212bf90a9e38c9bef2ba5a40a4f27: kube-system/kube-proxy-6ttp4/kube-proxy" id=4af0f039-6c33-4fec-aaf6-5f3d6e316d5c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:25:13 pause-951206 crio[2059]: time="2025-11-01T09:25:13.588581674Z" level=info msg="Starting container: 72980c4c54ed2cf65c1907185972beba807212bf90a9e38c9bef2ba5a40a4f27" id=1613ae5c-2ab4-4c83-8bae-8e93ceb4b829 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:25:13 pause-951206 crio[2059]: time="2025-11-01T09:25:13.591293886Z" level=info msg="Started container" PID=2358 containerID=72980c4c54ed2cf65c1907185972beba807212bf90a9e38c9bef2ba5a40a4f27 description=kube-system/kube-proxy-6ttp4/kube-proxy id=1613ae5c-2ab4-4c83-8bae-8e93ceb4b829 name=/runtime.v1.RuntimeService/StartContainer sandboxID=49fc64fa1c2272a5b1191914ddb190d5f6bf9005f62e075b7faa9a43183a9d13
	Nov 01 09:25:23 pause-951206 crio[2059]: time="2025-11-01T09:25:23.055786109Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:25:23 pause-951206 crio[2059]: time="2025-11-01T09:25:23.06076737Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:25:23 pause-951206 crio[2059]: time="2025-11-01T09:25:23.060987015Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:25:23 pause-951206 crio[2059]: time="2025-11-01T09:25:23.061061598Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:25:23 pause-951206 crio[2059]: time="2025-11-01T09:25:23.068185535Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:25:23 pause-951206 crio[2059]: time="2025-11-01T09:25:23.068345424Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:25:23 pause-951206 crio[2059]: time="2025-11-01T09:25:23.0684178Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:25:23 pause-951206 crio[2059]: time="2025-11-01T09:25:23.084174351Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:25:23 pause-951206 crio[2059]: time="2025-11-01T09:25:23.08440065Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:25:23 pause-951206 crio[2059]: time="2025-11-01T09:25:23.08449897Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:25:23 pause-951206 crio[2059]: time="2025-11-01T09:25:23.092157834Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:25:23 pause-951206 crio[2059]: time="2025-11-01T09:25:23.092314671Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:25:23 pause-951206 crio[2059]: time="2025-11-01T09:25:23.092388096Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:25:23 pause-951206 crio[2059]: time="2025-11-01T09:25:23.105405228Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:25:23 pause-951206 crio[2059]: time="2025-11-01T09:25:23.105564961Z" level=info msg="Updated default CNI network name to kindnet"
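
The CNI monitoring events above show kindnet rewriting 10-kindnet.conflist and CRI-O picking the change up. A quick way to confirm what the runtime ended up with is to read the file and the runtime status on the node; a sketch, assuming the pause-951206 profile is still running and crictl is configured for the CRI-O socket as it is in the minikube node image:

	minikube -p pause-951206 ssh -- sudo cat /etc/cni/net.d/10-kindnet.conflist
	minikube -p pause-951206 ssh -- sudo crictl info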
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	ea40b56f7de32       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   23 seconds ago       Running             kindnet-cni               1                   9f54cb18308f1       kindnet-q9r8f                          kube-system
	15815aa5b1fd6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   23 seconds ago       Running             coredns                   1                   ae16d8e0af76c       coredns-66bc5c9577-5vztm               kube-system
	72980c4c54ed2       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   23 seconds ago       Running             kube-proxy                1                   49fc64fa1c227       kube-proxy-6ttp4                       kube-system
	488f329d3b067       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   23 seconds ago       Running             kube-scheduler            1                   c0b1e2bdfa664       kube-scheduler-pause-951206            kube-system
	e0e2262ea0f4e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   23 seconds ago       Running             kube-controller-manager   1                   0ea6f902d3547       kube-controller-manager-pause-951206   kube-system
	1e267b46f1d9e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   23 seconds ago       Running             kube-apiserver            1                   fa74cf5479df0       kube-apiserver-pause-951206            kube-system
	151ac9fa21126       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   23 seconds ago       Running             etcd                      1                   4f21e872f3d55       etcd-pause-951206                      kube-system
	8c588c62b138b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   36 seconds ago       Exited              coredns                   0                   ae16d8e0af76c       coredns-66bc5c9577-5vztm               kube-system
	b504ec0758f08       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   9f54cb18308f1       kindnet-q9r8f                          kube-system
	ae4144162ff99       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   49fc64fa1c227       kube-proxy-6ttp4                       kube-system
	0cdb4df035719       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   fa74cf5479df0       kube-apiserver-pause-951206            kube-system
	c288732659394       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   4f21e872f3d55       etcd-pause-951206                      kube-system
	50e4c19be115c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   0ea6f902d3547       kube-controller-manager-pause-951206   kube-system
	c6f012ce8b285       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   c0b1e2bdfa664       kube-scheduler-pause-951206            kube-system
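
The container status table is CRI-O's view of the containers and their sandboxes after the restart (one Running and one Exited attempt per component). It can be regenerated on the node with crictl; a sketch:

	# Containers (all states) and their pod sandboxes, matching the table above.
	minikube -p pause-951206 ssh -- sudo crictl ps -a
	minikube -p pause-951206 ssh -- sudo crictl pods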
	
	
	==> coredns [15815aa5b1fd6da5af7f27e0905719d302ad2a07087e13caf2ccc3b4c889d5fd] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52224 - 56883 "HINFO IN 2270440970023884154.8025078414390256880. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.0564672s
	
	
	==> coredns [8c588c62b138bc6cc1aaeae9bc15a83731cfe0ee7bdd104f8d28c7b0b80aee31] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36078 - 24232 "HINFO IN 7947653426953632850.3437096107154526178. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012576989s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
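
The first coredns instance waits for the restarted API server before serving; the second received SIGTERM when the runtime was restarted. If in-cluster DNS needs a spot check after this kind of restart, a throwaway pod works; a sketch (busybox:1.36 and the pod name dns-check are arbitrary choices, not part of the test):

	kubectl --context pause-951206 run dns-check --image=busybox:1.36 \
	  --restart=Never --rm -it -- nslookup kubernetes.default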
	
	
	==> describe nodes <==
	Name:               pause-951206
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-951206
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=pause-951206
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_24_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:24:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-951206
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:25:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:24:59 +0000   Sat, 01 Nov 2025 09:24:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:24:59 +0000   Sat, 01 Nov 2025 09:24:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:24:59 +0000   Sat, 01 Nov 2025 09:24:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:24:59 +0000   Sat, 01 Nov 2025 09:24:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-951206
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                1b53ea08-0f20-424e-bf47-e4d9e80e497e
	  Boot ID:                    eebecd53-57fd-46e5-aa39-103fca906436
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-5vztm                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     78s
	  kube-system                 etcd-pause-951206                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         83s
	  kube-system                 kindnet-q9r8f                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      78s
	  kube-system                 kube-apiserver-pause-951206             250m (12%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-controller-manager-pause-951206    200m (10%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-proxy-6ttp4                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-pause-951206             100m (5%)     0 (0%)      0 (0%)           0 (0%)         83s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 77s                kube-proxy       
	  Normal   Starting                 15s                kube-proxy       
	  Normal   NodeHasSufficientMemory  96s (x8 over 96s)  kubelet          Node pause-951206 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    96s (x8 over 96s)  kubelet          Node pause-951206 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     96s (x8 over 96s)  kubelet          Node pause-951206 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 84s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 84s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  83s                kubelet          Node pause-951206 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    83s                kubelet          Node pause-951206 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     83s                kubelet          Node pause-951206 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           79s                node-controller  Node pause-951206 event: Registered Node pause-951206 in Controller
	  Normal   NodeReady                37s                kubelet          Node pause-951206 status is now: NodeReady
	  Normal   RegisteredNode           15s                node-controller  Node pause-951206 event: Registered Node pause-951206 in Controller
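
The node description above is ordinary kubectl output for the single control-plane node; the Ready condition it reports can be pulled directly, a sketch:

	kubectl --context pause-951206 describe node pause-951206
	kubectl --context pause-951206 get node pause-951206 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'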
	
	
	==> dmesg <==
	[Nov 1 09:00] overlayfs: idmapped layers are currently not supported
	[  +4.169917] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:01] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:02] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:03] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:08] overlayfs: idmapped layers are currently not supported
	[ +35.036001] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:10] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:11] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:12] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:13] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:14] overlayfs: idmapped layers are currently not supported
	[  +7.992192] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:15] overlayfs: idmapped layers are currently not supported
	[ +24.457663] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:16] overlayfs: idmapped layers are currently not supported
	[ +26.408819] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:18] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:22] overlayfs: idmapped layers are currently not supported
	[ +31.970573] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:24] overlayfs: idmapped layers are currently not supported
	[ +34.721891] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:25] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [151ac9fa211263a6da9c02d44bf9df8af1a169e8ad976bb46ffd74c1cc8a3b89] <==
	{"level":"warn","ts":"2025-11-01T09:25:15.957027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:15.986027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.048184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.082607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.100127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.116808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.159712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.198231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.211533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.234290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.304753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.342040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.358224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.388784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.403497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.420315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.452160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.475990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.495781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.539297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.570624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.594718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.628443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.656230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.793764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33494","server-name":"","error":"EOF"}
	
	
	==> etcd [c2887326593949338658d54bb176a3f92ca0ce4d7619db8e75d1d9e2fd3c297c] <==
	{"level":"warn","ts":"2025-11-01T09:24:07.633816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:24:07.647932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:24:07.682986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:24:07.794837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:24:07.808486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:24:07.818022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:24:07.933147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54704","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T09:25:04.203165Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-01T09:25:04.203232Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-951206","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-11-01T09:25:04.203326Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T09:25:04.203369Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-11-01T09:25:06.878695Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T09:25:06.878758Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T09:25:06.878749Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-11-01T09:25:06.878768Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-01T09:25:06.878695Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T09:25:06.878784Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-11-01T09:25:06.878782Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"error","ts":"2025-11-01T09:25:06.878791Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T09:25:06.878804Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-01T09:25:06.878851Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-01T09:25:06.884592Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-11-01T09:25:06.884729Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T09:25:06.884791Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-01T09:25:06.884840Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-951206","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 09:25:36 up 18:08,  0 user,  load average: 4.09, 3.71, 2.75
	Linux pause-951206 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b504ec0758f08784fd92576699693b07ab12f61917f04b0c3f9548f87aa4e834] <==
	I1101 09:24:18.755471       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:24:18.755808       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 09:24:18.756017       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:24:18.756062       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:24:18.756099       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:24:18Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:24:18.965840       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:24:18.965935       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:24:18.965968       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:24:18.966954       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 09:24:48.966043       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 09:24:48.967064       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 09:24:48.967236       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 09:24:48.967317       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1101 09:24:50.466936       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:24:50.467043       1 metrics.go:72] Registering metrics
	I1101 09:24:50.467144       1 controller.go:711] "Syncing nftables rules"
	I1101 09:24:58.971940       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 09:24:58.971979       1 main.go:301] handling current node
	
	
	==> kindnet [ea40b56f7de32b3254ff839c8bd72fe33fe815b4ec8f7ab3ba120646dffc2676] <==
	I1101 09:25:12.871129       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:25:12.884161       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 09:25:12.884408       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:25:12.884459       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:25:12.884498       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:25:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:25:13.055473       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:25:13.055500       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:25:13.055512       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:25:13.056680       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:25:18.561269       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:25:18.561384       1 metrics.go:72] Registering metrics
	I1101 09:25:18.561498       1 controller.go:711] "Syncing nftables rules"
	I1101 09:25:23.055260       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 09:25:23.055437       1 main.go:301] handling current node
	I1101 09:25:33.055557       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 09:25:33.055602       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0cdb4df035719000dd91d22036208e0ef5b5c165830c9fe2e474beebd7fa8f3d] <==
	W1101 09:25:05.265100       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:05.265136       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:05.265175       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:05.265211       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:05.265257       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:05.265273       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:05.265298       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:05.265319       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:05.265338       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:05.265362       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:05.265383       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:06.494465       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:06.529372       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:06.573193       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:06.573193       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:06.579652       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:06.580887       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:06.585254       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:06.599951       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:06.602295       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:06.635264       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:06.645807       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:06.648264       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:06.670190       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:06.674792       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [1e267b46f1d9e93b1931162cf060aaf69731ece8d757ffde3a2582cfd7651ffb] <==
	I1101 09:25:18.466266       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:25:18.508206       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:25:18.520733       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 09:25:18.534145       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 09:25:18.534266       1 policy_source.go:240] refreshing policies
	I1101 09:25:18.537182       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 09:25:18.543662       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:25:18.550501       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 09:25:18.559591       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 09:25:18.559633       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 09:25:18.577239       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 09:25:18.602926       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 09:25:18.603234       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 09:25:18.632412       1 aggregator.go:171] initial CRD sync complete...
	I1101 09:25:18.632509       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 09:25:18.632554       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:25:18.632599       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:25:18.634246       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1101 09:25:18.668815       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 09:25:18.862955       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:25:20.382319       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:25:21.832643       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:25:21.869137       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:25:22.015659       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:25:22.066356       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [50e4c19be115cbd5a27bca874f1801f8fb8f2ae6a82fb47904af1031ae88e97a] <==
	I1101 09:24:17.300547       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 09:24:17.301756       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 09:24:17.301866       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-951206"
	I1101 09:24:17.301937       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 09:24:17.302375       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 09:24:17.313036       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-951206" podCIDRs=["10.244.0.0/24"]
	I1101 09:24:17.313663       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 09:24:17.318790       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:24:17.325035       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 09:24:17.329594       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 09:24:17.335622       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 09:24:17.335720       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:24:17.335728       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:24:17.335747       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:24:17.337416       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 09:24:17.337481       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:24:17.343720       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 09:24:17.343776       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 09:24:17.347891       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:24:17.347954       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 09:24:17.347971       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 09:24:17.348346       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:24:17.366334       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:24:17.380046       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:25:02.309239       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [e0e2262ea0f4e166f11273995407222648770de6a2fb43aaafc290e160ee6f6d] <==
	I1101 09:25:21.751322       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 09:25:21.751759       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 09:25:21.753635       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 09:25:21.753733       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:25:21.753745       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 09:25:21.753762       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 09:25:21.757232       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 09:25:21.758871       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 09:25:21.758954       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 09:25:21.759843       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 09:25:21.760083       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 09:25:21.760338       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 09:25:21.760419       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:25:21.763611       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 09:25:21.763730       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 09:25:21.772562       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 09:25:21.772929       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 09:25:21.773115       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 09:25:21.773299       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-951206"
	I1101 09:25:21.773386       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 09:25:21.801361       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:25:21.805558       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:25:21.805649       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:25:21.805684       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:25:21.802387       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	
	
	==> kube-proxy [72980c4c54ed2cf65c1907185972beba807212bf90a9e38c9bef2ba5a40a4f27] <==
	I1101 09:25:14.160083       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:25:15.962005       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:25:18.696467       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:25:18.711888       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 09:25:18.736719       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:25:20.737407       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:25:20.737467       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:25:21.011930       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:25:21.021965       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:25:21.079933       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:25:21.081343       1 config.go:200] "Starting service config controller"
	I1101 09:25:21.139917       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:25:21.140004       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:25:21.140016       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:25:21.140031       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:25:21.140041       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:25:21.140782       1 config.go:309] "Starting node config controller"
	I1101 09:25:21.140801       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:25:21.140808       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:25:21.270087       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:25:21.275914       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:25:21.348160       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [ae4144162ff99a96713f4b79715f1b459b8757fe18777f4b87377958ea076cd5] <==
	I1101 09:24:18.772635       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:24:18.851524       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:24:18.953838       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:24:18.953875       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 09:24:18.953968       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:24:19.013267       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:24:19.013325       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:24:19.026266       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:24:19.026603       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:24:19.026620       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:24:19.027655       1 config.go:200] "Starting service config controller"
	I1101 09:24:19.027665       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:24:19.030209       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:24:19.030237       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:24:19.030280       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:24:19.030285       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:24:19.030922       1 config.go:309] "Starting node config controller"
	I1101 09:24:19.030930       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:24:19.030935       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:24:19.128173       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:24:19.130767       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:24:19.130802       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [488f329d3b067e2efa5f2037a5cad268b4ba316f7b54011aaa9c30ec6aee51cc] <==
	I1101 09:25:18.377839       1 serving.go:386] Generated self-signed cert in-memory
	I1101 09:25:22.598534       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:25:22.598567       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:25:22.604211       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:25:22.604316       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 09:25:22.604332       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 09:25:22.604358       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:25:22.620289       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:25:22.620315       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:25:22.620338       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:25:22.620351       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:25:22.704428       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 09:25:22.721384       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:25:22.721463       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [c6f012ce8b285f2c250e9e5f1e148dce648fbf07bd9c33e499baf85c396f37d8] <==
	I1101 09:24:08.947964       1 serving.go:386] Generated self-signed cert in-memory
	I1101 09:24:11.708728       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:24:11.708763       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:24:11.713924       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:24:11.714001       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 09:24:11.714027       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 09:24:11.714056       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:24:11.741274       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:24:11.741308       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:24:11.741520       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:24:11.741534       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:24:11.814684       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 09:24:11.842063       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:24:11.842135       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:25:04.222808       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1101 09:25:04.222894       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1101 09:25:04.225481       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1101 09:25:04.227644       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:25:04.227729       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:25:04.227777       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1101 09:25:04.232920       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1101 09:25:04.233001       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 01 09:25:12 pause-951206 kubelet[1296]: I1101 09:25:12.442139    1296 scope.go:117] "RemoveContainer" containerID="8c588c62b138bc6cc1aaeae9bc15a83731cfe0ee7bdd104f8d28c7b0b80aee31"
	Nov 01 09:25:12 pause-951206 kubelet[1296]: E1101 09:25:12.442565    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-5vztm\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="39f1f2f9-c206-4cc0-a799-cb547db90061" pod="kube-system/coredns-66bc5c9577-5vztm"
	Nov 01 09:25:12 pause-951206 kubelet[1296]: E1101 09:25:12.442719    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-951206\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="f4a72daa8226daa8a8c8bca246dc7993" pod="kube-system/kube-controller-manager-pause-951206"
	Nov 01 09:25:12 pause-951206 kubelet[1296]: E1101 09:25:12.442861    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-951206\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="22850738db4b57fe25790f0b6f614526" pod="kube-system/etcd-pause-951206"
	Nov 01 09:25:12 pause-951206 kubelet[1296]: E1101 09:25:12.442997    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-951206\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6977649d9e2ac633ed5a82c18c6cd213" pod="kube-system/kube-apiserver-pause-951206"
	Nov 01 09:25:12 pause-951206 kubelet[1296]: E1101 09:25:12.443128    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-951206\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="0b4a3cb8eda2a2cbd443a0370aa2b9cd" pod="kube-system/kube-scheduler-pause-951206"
	Nov 01 09:25:12 pause-951206 kubelet[1296]: E1101 09:25:12.443262    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6ttp4\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="90e67eaf-ffaf-43f7-bf24-0fa6509c4ed3" pod="kube-system/kube-proxy-6ttp4"
	Nov 01 09:25:12 pause-951206 kubelet[1296]: E1101 09:25:12.443391    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-q9r8f\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="8791b18b-4128-40c1-961b-b9eb8bb798e0" pod="kube-system/kindnet-q9r8f"
	Nov 01 09:25:18 pause-951206 kubelet[1296]: E1101 09:25:18.444736    1296 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-951206\" is forbidden: User \"system:node:pause-951206\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-951206' and this object" podUID="f4a72daa8226daa8a8c8bca246dc7993" pod="kube-system/kube-controller-manager-pause-951206"
	Nov 01 09:25:18 pause-951206 kubelet[1296]: E1101 09:25:18.448281    1296 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-951206\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-951206' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 01 09:25:18 pause-951206 kubelet[1296]: E1101 09:25:18.448354    1296 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-951206\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-951206' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Nov 01 09:25:18 pause-951206 kubelet[1296]: E1101 09:25:18.448370    1296 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-951206\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-951206' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 01 09:25:18 pause-951206 kubelet[1296]: E1101 09:25:18.465932    1296 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-951206\" is forbidden: User \"system:node:pause-951206\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-951206' and this object" podUID="22850738db4b57fe25790f0b6f614526" pod="kube-system/etcd-pause-951206"
	Nov 01 09:25:18 pause-951206 kubelet[1296]: E1101 09:25:18.494142    1296 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-951206\" is forbidden: User \"system:node:pause-951206\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-951206' and this object" podUID="6977649d9e2ac633ed5a82c18c6cd213" pod="kube-system/kube-apiserver-pause-951206"
	Nov 01 09:25:18 pause-951206 kubelet[1296]: E1101 09:25:18.496256    1296 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-951206\" is forbidden: User \"system:node:pause-951206\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-951206' and this object" podUID="0b4a3cb8eda2a2cbd443a0370aa2b9cd" pod="kube-system/kube-scheduler-pause-951206"
	Nov 01 09:25:18 pause-951206 kubelet[1296]: E1101 09:25:18.504960    1296 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-6ttp4\" is forbidden: User \"system:node:pause-951206\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-951206' and this object" podUID="90e67eaf-ffaf-43f7-bf24-0fa6509c4ed3" pod="kube-system/kube-proxy-6ttp4"
	Nov 01 09:25:18 pause-951206 kubelet[1296]: E1101 09:25:18.512624    1296 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-q9r8f\" is forbidden: User \"system:node:pause-951206\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-951206' and this object" podUID="8791b18b-4128-40c1-961b-b9eb8bb798e0" pod="kube-system/kindnet-q9r8f"
	Nov 01 09:25:18 pause-951206 kubelet[1296]: E1101 09:25:18.521044    1296 status_manager.go:1018] "Failed to get status for pod" err=<
	Nov 01 09:25:18 pause-951206 kubelet[1296]:         pods "coredns-66bc5c9577-5vztm" is forbidden: User "system:node:pause-951206" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-951206' and this object
	Nov 01 09:25:18 pause-951206 kubelet[1296]:         RBAC: [role.rbac.authorization.k8s.io "kubeadm:kubelet-config" not found, role.rbac.authorization.k8s.io "kubeadm:nodes-kubeadm-config" not found]
	Nov 01 09:25:18 pause-951206 kubelet[1296]:  > podUID="39f1f2f9-c206-4cc0-a799-cb547db90061" pod="kube-system/coredns-66bc5c9577-5vztm"
	Nov 01 09:25:23 pause-951206 kubelet[1296]: W1101 09:25:23.284925    1296 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 01 09:25:32 pause-951206 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 09:25:32 pause-951206 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 09:25:32 pause-951206 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-951206 -n pause-951206
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-951206 -n pause-951206: exit status 2 (675.380836ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-951206 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-951206
helpers_test.go:243: (dbg) docker inspect pause-951206:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "abfc5fa13ffc725484d745220e19d608d4a1d831946506e06159dfda90300c7a",
	        "Created": "2025-11-01T09:23:42.617544868Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2471153,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:23:42.690792213Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/abfc5fa13ffc725484d745220e19d608d4a1d831946506e06159dfda90300c7a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/abfc5fa13ffc725484d745220e19d608d4a1d831946506e06159dfda90300c7a/hostname",
	        "HostsPath": "/var/lib/docker/containers/abfc5fa13ffc725484d745220e19d608d4a1d831946506e06159dfda90300c7a/hosts",
	        "LogPath": "/var/lib/docker/containers/abfc5fa13ffc725484d745220e19d608d4a1d831946506e06159dfda90300c7a/abfc5fa13ffc725484d745220e19d608d4a1d831946506e06159dfda90300c7a-json.log",
	        "Name": "/pause-951206",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-951206:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-951206",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "abfc5fa13ffc725484d745220e19d608d4a1d831946506e06159dfda90300c7a",
	                "LowerDir": "/var/lib/docker/overlay2/67650ff6703aef5979c767bdfabcb5b7fa22a3f0bc789d102f9a086ad487e913-init/diff:/var/lib/docker/overlay2/e248e2c4c8c52e2b41c7098e27a1e6d3433c7b0d01c47093073da500268c4b77/diff",
	                "MergedDir": "/var/lib/docker/overlay2/67650ff6703aef5979c767bdfabcb5b7fa22a3f0bc789d102f9a086ad487e913/merged",
	                "UpperDir": "/var/lib/docker/overlay2/67650ff6703aef5979c767bdfabcb5b7fa22a3f0bc789d102f9a086ad487e913/diff",
	                "WorkDir": "/var/lib/docker/overlay2/67650ff6703aef5979c767bdfabcb5b7fa22a3f0bc789d102f9a086ad487e913/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-951206",
	                "Source": "/var/lib/docker/volumes/pause-951206/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-951206",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-951206",
	                "name.minikube.sigs.k8s.io": "pause-951206",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "34752ceea37b121cdb9cdf5063c0688acb7a287623ef676bf254afeefb206183",
	            "SandboxKey": "/var/run/docker/netns/34752ceea37b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36310"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36311"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36314"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36312"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36313"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-951206": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:49:1e:6c:2e:46",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "94f1ea4f501e9b1fe920324e249aba2358057c0615454cb6a22317732f3b8aad",
	                    "EndpointID": "2dbb3722a6393f7065b9be3cf10256db6a24ca82887c18688d0a171319e0861f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-951206",
	                        "abfc5fa13ffc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-951206 -n pause-951206
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-951206 -n pause-951206: exit status 2 (478.001051ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-951206 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-951206 logs -n 25: (1.75122226s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-206273 sudo journalctl -xeu kubelet --all --full --no-pager                                      │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:24 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:24 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:24 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo systemctl status docker --all --full --no-pager                                      │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:24 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo systemctl cat docker --no-pager                                                      │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:24 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo cat /etc/docker/daemon.json                                                          │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:24 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo docker system info                                                                   │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:24 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo cri-dockerd --version                                                                │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo systemctl cat containerd --no-pager                                                  │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo cat /etc/containerd/config.toml                                                      │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo containerd config dump                                                               │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo systemctl status crio --all --full --no-pager                                        │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo systemctl cat crio --no-pager                                                        │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo crio config                                                                          │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ delete  │ -p cilium-206273                                                                                           │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │ 01 Nov 25 09:25 UTC │
	│ start   │ -p force-systemd-env-778652 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-778652 │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ start   │ -p pause-951206 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                           │ pause-951206             │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │ 01 Nov 25 09:25 UTC │
	│ pause   │ -p pause-951206 --alsologtostderr -v=5                                                                     │ pause-951206             │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:25:02
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:25:02.268798 2478398 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:25:02.268919 2478398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:25:02.268924 2478398 out.go:374] Setting ErrFile to fd 2...
	I1101 09:25:02.268928 2478398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:25:02.269165 2478398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 09:25:02.269530 2478398 out.go:368] Setting JSON to false
	I1101 09:25:02.270522 2478398 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":65248,"bootTime":1761923854,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 09:25:02.270585 2478398 start.go:143] virtualization:  
	I1101 09:25:02.274033 2478398 out.go:179] * [pause-951206] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 09:25:02.278200 2478398 notify.go:221] Checking for updates...
	I1101 09:25:02.282648 2478398 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:25:02.288397 2478398 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:25:02.291563 2478398 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:25:02.294633 2478398 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2314135/.minikube
	I1101 09:25:02.298354 2478398 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 09:25:02.301539 2478398 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:25:02.304965 2478398 config.go:182] Loaded profile config "pause-951206": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:25:02.305513 2478398 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:25:02.352707 2478398 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:25:02.352821 2478398 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:25:02.439228 2478398 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:58 SystemTime:2025-11-01 09:25:02.416182009 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:25:02.439339 2478398 docker.go:319] overlay module found
	I1101 09:25:02.443015 2478398 out.go:179] * Using the docker driver based on existing profile
	I1101 09:25:02.445878 2478398 start.go:309] selected driver: docker
	I1101 09:25:02.445902 2478398 start.go:930] validating driver "docker" against &{Name:pause-951206 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-951206 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:25:02.446084 2478398 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:25:02.446208 2478398 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:25:02.532824 2478398 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 09:25:02.523640442 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:25:02.533331 2478398 cni.go:84] Creating CNI manager for ""
	I1101 09:25:02.533407 2478398 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:25:02.533465 2478398 start.go:353] cluster config:
	{Name:pause-951206 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-951206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:25:02.539887 2478398 out.go:179] * Starting "pause-951206" primary control-plane node in "pause-951206" cluster
	I1101 09:25:02.544035 2478398 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:25:02.548059 2478398 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:25:02.552130 2478398 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:25:02.552137 2478398 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:25:02.552192 2478398 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 09:25:02.552208 2478398 cache.go:59] Caching tarball of preloaded images
	I1101 09:25:02.552293 2478398 preload.go:233] Found /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 09:25:02.552302 2478398 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:25:02.552443 2478398 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/pause-951206/config.json ...
	I1101 09:25:02.584407 2478398 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:25:02.584435 2478398 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:25:02.584448 2478398 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:25:02.584586 2478398 start.go:360] acquireMachinesLock for pause-951206: {Name:mkdc7ab99ea2756e15d5e7197b949eac20411fc7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:25:02.584658 2478398 start.go:364] duration metric: took 40.196µs to acquireMachinesLock for "pause-951206"
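	The acquireMachinesLock entries above show the per-profile machine lock being taken with a 500ms retry delay and a 10-minute timeout before the existing "pause-951206" machine is reused. A minimal sketch of that retry-until-timeout locking pattern, using an invented lock-file path and not minikube's actual lock implementation:

    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // acquireLock tries to create lockPath exclusively, retrying every delay
    // until timeout expires. On success it returns a release function.
    func acquireLock(lockPath string, delay, timeout time.Duration) (func(), error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(lockPath) }, nil
    		}
    		if !errors.Is(err, os.ErrExist) {
    			return nil, err // unexpected filesystem error: give up immediately
    		}
    		if time.Now().After(deadline) {
    			return nil, fmt.Errorf("timed out after %s waiting for %s", timeout, lockPath)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	// Hypothetical lock-file name for illustration only.
    	release, err := acquireLock("/tmp/pause-951206.lock", 500*time.Millisecond, 10*time.Minute)
    	if err != nil {
    		fmt.Println("lock not acquired:", err)
    		return
    	}
    	defer release()
    	fmt.Println("lock acquired")
    }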
	I1101 09:25:02.584683 2478398 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:25:02.584692 2478398 fix.go:54] fixHost starting: 
	I1101 09:25:02.584953 2478398 cli_runner.go:164] Run: docker container inspect pause-951206 --format={{.State.Status}}
	I1101 09:25:02.621156 2478398 fix.go:112] recreateIfNeeded on pause-951206: state=Running err=<nil>
	W1101 09:25:02.621193 2478398 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:25:01.712582 2478201 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 09:25:01.712904 2478201 start.go:159] libmachine.API.Create for "force-systemd-env-778652" (driver="docker")
	I1101 09:25:01.712948 2478201 client.go:173] LocalClient.Create starting
	I1101 09:25:01.713017 2478201 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem
	I1101 09:25:01.713055 2478201 main.go:143] libmachine: Decoding PEM data...
	I1101 09:25:01.713076 2478201 main.go:143] libmachine: Parsing certificate...
	I1101 09:25:01.713141 2478201 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem
	I1101 09:25:01.713163 2478201 main.go:143] libmachine: Decoding PEM data...
	I1101 09:25:01.713184 2478201 main.go:143] libmachine: Parsing certificate...
	I1101 09:25:01.713587 2478201 cli_runner.go:164] Run: docker network inspect force-systemd-env-778652 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 09:25:01.732633 2478201 cli_runner.go:211] docker network inspect force-systemd-env-778652 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 09:25:01.732743 2478201 network_create.go:284] running [docker network inspect force-systemd-env-778652] to gather additional debugging logs...
	I1101 09:25:01.732770 2478201 cli_runner.go:164] Run: docker network inspect force-systemd-env-778652
	W1101 09:25:01.747835 2478201 cli_runner.go:211] docker network inspect force-systemd-env-778652 returned with exit code 1
	I1101 09:25:01.748003 2478201 network_create.go:287] error running [docker network inspect force-systemd-env-778652]: docker network inspect force-systemd-env-778652: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-778652 not found
	I1101 09:25:01.748022 2478201 network_create.go:289] output of [docker network inspect force-systemd-env-778652]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-778652 not found
	
	** /stderr **
	I1101 09:25:01.748251 2478201 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:25:01.767056 2478201 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2d14cb2bf967 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:44:96:dd:d5:f7} reservation:<nil>}
	I1101 09:25:01.767453 2478201 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5e2113ca68f6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fa:43:2d:73:9d:6f} reservation:<nil>}
	I1101 09:25:01.767822 2478201 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-06825307e87a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:46:bb:6a:93:8e:bc} reservation:<nil>}
	I1101 09:25:01.768395 2478201 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400197bc40}
	I1101 09:25:01.768423 2478201 network_create.go:124] attempt to create docker network force-systemd-env-778652 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1101 09:25:01.768500 2478201 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-778652 force-systemd-env-778652
	I1101 09:25:01.839131 2478201 network_create.go:108] docker network force-systemd-env-778652 192.168.76.0/24 created
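	The network_create lines above walk candidate 192.168.x.0/24 subnets (49, 58, 67, ...) and skip each one already claimed by an existing bridge before settling on 192.168.76.0/24. A minimal sketch of that scan, assuming the simplified heuristic that a subnet is "taken" when its .1 gateway address is bound to a local interface; this is an illustration, not minikube's network.go logic:

    package main

    import (
    	"fmt"
    	"net"
    )

    // gatewayInUse reports whether ip is assigned to any local interface,
    // which is how this sketch decides a candidate subnet is already taken.
    func gatewayInUse(ip net.IP) bool {
    	addrs, err := net.InterfaceAddrs()
    	if err != nil {
    		return false
    	}
    	for _, a := range addrs {
    		if ipNet, ok := a.(*net.IPNet); ok && ipNet.IP.Equal(ip) {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	// Same candidate sequence seen in the log: 192.168.49.0/24, then +9 per step.
    	for third := 49; third <= 247; third += 9 {
    		gateway := net.IPv4(192, 168, byte(third), 1)
    		if gatewayInUse(gateway) {
    			fmt.Printf("skipping 192.168.%d.0/24: gateway %s already in use\n", third, gateway)
    			continue
    		}
    		fmt.Printf("using free private subnet 192.168.%d.0/24 (gateway %s)\n", third, gateway)
    		return
    	}
    	fmt.Println("no free /24 found among the candidates")
    }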
	I1101 09:25:01.839165 2478201 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-env-778652" container
	I1101 09:25:01.839243 2478201 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 09:25:01.866455 2478201 cli_runner.go:164] Run: docker volume create force-systemd-env-778652 --label name.minikube.sigs.k8s.io=force-systemd-env-778652 --label created_by.minikube.sigs.k8s.io=true
	I1101 09:25:01.886587 2478201 oci.go:103] Successfully created a docker volume force-systemd-env-778652
	I1101 09:25:01.886681 2478201 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-778652-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-778652 --entrypoint /usr/bin/test -v force-systemd-env-778652:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 09:25:02.515832 2478201 oci.go:107] Successfully prepared a docker volume force-systemd-env-778652
	I1101 09:25:02.515894 2478201 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:25:02.515914 2478201 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 09:25:02.515980 2478201 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-778652:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 09:25:02.624713 2478398 out.go:252] * Updating the running docker "pause-951206" container ...
	I1101 09:25:02.624752 2478398 machine.go:94] provisionDockerMachine start ...
	I1101 09:25:02.624844 2478398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-951206
	I1101 09:25:02.647678 2478398 main.go:143] libmachine: Using SSH client type: native
	I1101 09:25:02.648031 2478398 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36310 <nil> <nil>}
	I1101 09:25:02.648058 2478398 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:25:02.835632 2478398 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-951206
	
	I1101 09:25:02.835656 2478398 ubuntu.go:182] provisioning hostname "pause-951206"
	I1101 09:25:02.835728 2478398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-951206
	I1101 09:25:02.867627 2478398 main.go:143] libmachine: Using SSH client type: native
	I1101 09:25:02.867984 2478398 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36310 <nil> <nil>}
	I1101 09:25:02.868001 2478398 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-951206 && echo "pause-951206" | sudo tee /etc/hostname
	I1101 09:25:03.055577 2478398 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-951206
	
	I1101 09:25:03.055663 2478398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-951206
	I1101 09:25:03.086683 2478398 main.go:143] libmachine: Using SSH client type: native
	I1101 09:25:03.087070 2478398 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36310 <nil> <nil>}
	I1101 09:25:03.087094 2478398 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-951206' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-951206/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-951206' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:25:03.264576 2478398 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:25:03.264603 2478398 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-2314135/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-2314135/.minikube}
	I1101 09:25:03.264642 2478398 ubuntu.go:190] setting up certificates
	I1101 09:25:03.264657 2478398 provision.go:84] configureAuth start
	I1101 09:25:03.264719 2478398 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-951206
	I1101 09:25:03.289653 2478398 provision.go:143] copyHostCerts
	I1101 09:25:03.289726 2478398 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem, removing ...
	I1101 09:25:03.289747 2478398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem
	I1101 09:25:03.289823 2478398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem (1082 bytes)
	I1101 09:25:03.289915 2478398 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem, removing ...
	I1101 09:25:03.289926 2478398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem
	I1101 09:25:03.289955 2478398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem (1123 bytes)
	I1101 09:25:03.290009 2478398 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem, removing ...
	I1101 09:25:03.290017 2478398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem
	I1101 09:25:03.290041 2478398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem (1675 bytes)
	I1101 09:25:03.290089 2478398 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem org=jenkins.pause-951206 san=[127.0.0.1 192.168.85.2 localhost minikube pause-951206]
	I1101 09:25:03.784606 2478398 provision.go:177] copyRemoteCerts
	I1101 09:25:03.784674 2478398 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:25:03.784717 2478398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-951206
	I1101 09:25:03.802621 2478398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36310 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/pause-951206/id_rsa Username:docker}
	I1101 09:25:03.920168 2478398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:25:03.955125 2478398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 09:25:03.974650 2478398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 09:25:03.994861 2478398 provision.go:87] duration metric: took 730.176559ms to configureAuth
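	configureAuth above regenerates a server certificate for the node with the SANs logged at provision.go:117 (127.0.0.1, 192.168.85.2, localhost, minikube, pause-951206), signed by the profile CA, then copies it to /etc/docker on the machine. A self-contained sketch of minting such a SAN-bearing server certificate with crypto/x509, using a throwaway in-memory CA in place of the ca.pem/ca-key.pem pair referenced in the log (illustration only, not minikube's provisioner code):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// Throwaway CA standing in for the profile's ca.pem / ca-key.pem.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server certificate carrying the SANs seen in the log for pause-951206.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "pause-951206", Organization: []string{"jenkins.pause-951206"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"localhost", "minikube", "pause-951206"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
    	}
    	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
    }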
	I1101 09:25:03.994897 2478398 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:25:03.995137 2478398 config.go:182] Loaded profile config "pause-951206": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:25:03.995275 2478398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-951206
	I1101 09:25:04.018477 2478398 main.go:143] libmachine: Using SSH client type: native
	I1101 09:25:04.018838 2478398 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36310 <nil> <nil>}
	I1101 09:25:04.018860 2478398 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:25:09.383979 2478398 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:25:09.384005 2478398 machine.go:97] duration metric: took 6.759244613s to provisionDockerMachine
	I1101 09:25:09.384017 2478398 start.go:293] postStartSetup for "pause-951206" (driver="docker")
	I1101 09:25:09.384028 2478398 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:25:09.384091 2478398 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:25:09.384139 2478398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-951206
	I1101 09:25:09.401363 2478398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36310 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/pause-951206/id_rsa Username:docker}
	I1101 09:25:09.503468 2478398 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:25:09.506820 2478398 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:25:09.506853 2478398 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:25:09.506864 2478398 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/addons for local assets ...
	I1101 09:25:09.506918 2478398 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/files for local assets ...
	I1101 09:25:09.507007 2478398 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem -> 23159822.pem in /etc/ssl/certs
	I1101 09:25:09.507114 2478398 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:25:09.514440 2478398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem --> /etc/ssl/certs/23159822.pem (1708 bytes)
	I1101 09:25:09.531947 2478398 start.go:296] duration metric: took 147.915591ms for postStartSetup
	I1101 09:25:09.532038 2478398 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:25:09.532085 2478398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-951206
	I1101 09:25:09.549027 2478398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36310 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/pause-951206/id_rsa Username:docker}
	I1101 09:25:09.648817 2478398 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:25:09.653784 2478398 fix.go:56] duration metric: took 7.069085326s for fixHost
	I1101 09:25:09.653857 2478398 start.go:83] releasing machines lock for "pause-951206", held for 7.069184499s
	I1101 09:25:09.653966 2478398 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-951206
	I1101 09:25:09.670153 2478398 ssh_runner.go:195] Run: cat /version.json
	I1101 09:25:09.670205 2478398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-951206
	I1101 09:25:09.670601 2478398 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:25:09.670659 2478398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-951206
	I1101 09:25:09.694592 2478398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36310 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/pause-951206/id_rsa Username:docker}
	I1101 09:25:09.701792 2478398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36310 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/pause-951206/id_rsa Username:docker}
	I1101 09:25:09.878932 2478398 ssh_runner.go:195] Run: systemctl --version
	I1101 09:25:09.885389 2478398 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:25:09.922864 2478398 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:25:09.927250 2478398 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:25:09.927330 2478398 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:25:09.935222 2478398 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 09:25:09.935245 2478398 start.go:496] detecting cgroup driver to use...
	I1101 09:25:09.935276 2478398 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 09:25:09.935340 2478398 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:25:09.950765 2478398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:25:09.963664 2478398 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:25:09.963740 2478398 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:25:09.978938 2478398 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:25:09.992541 2478398 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:25:10.131906 2478398 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:25:10.270687 2478398 docker.go:234] disabling docker service ...
	I1101 09:25:10.270752 2478398 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:25:10.285724 2478398 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:25:10.298589 2478398 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:25:10.426585 2478398 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:25:10.564507 2478398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:25:10.577299 2478398 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:25:10.591665 2478398 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:25:10.591729 2478398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:25:10.600673 2478398 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:25:10.600753 2478398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:25:10.610184 2478398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:25:10.619079 2478398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:25:10.628104 2478398 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:25:10.636889 2478398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:25:10.646330 2478398 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:25:10.655237 2478398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:25:10.664017 2478398 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:25:10.671438 2478398 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:25:10.678831 2478398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:25:10.808449 2478398 ssh_runner.go:195] Run: sudo systemctl restart crio
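	The crio.go steps above rewrite /etc/crio/crio.conf.d/02-crio.conf in place with sed, pinning the pause image to registry.k8s.io/pause:3.10.1, switching cgroup_manager to "cgroupfs", forcing conmon_cgroup to "pod", and adding net.ipv4.ip_unprivileged_port_start=0 to default_sysctls, before restarting crio. A rough in-memory equivalent of the first few substitutions, applied to a hypothetical starting drop-in (the real file on the node may differ):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// Hypothetical starting content of 02-crio.conf, for illustration only.
    	conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"
    `

    	// Mirrors: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)

    	// Mirrors: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

    	// Mirrors: delete any existing conmon_cgroup line, then append
    	// conmon_cgroup = "pod" right after the cgroup_manager line.
    	conf = regexp.MustCompile(`(?m)^\s*conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
    	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
    		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")

    	fmt.Print(conf)
    }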
	I1101 09:25:10.978460 2478398 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:25:10.978548 2478398 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:25:10.983267 2478398 start.go:564] Will wait 60s for crictl version
	I1101 09:25:10.983333 2478398 ssh_runner.go:195] Run: which crictl
	I1101 09:25:10.986965 2478398 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:25:11.015254 2478398 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:25:11.015348 2478398 ssh_runner.go:195] Run: crio --version
	I1101 09:25:11.042945 2478398 ssh_runner.go:195] Run: crio --version
	I1101 09:25:11.075812 2478398 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:25:06.686717 2478201 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-778652:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.170700539s)
	I1101 09:25:06.686753 2478201 kic.go:203] duration metric: took 4.17083478s to extract preloaded images to volume ...
	W1101 09:25:06.686906 2478201 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 09:25:06.687023 2478201 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 09:25:06.759397 2478201 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-778652 --name force-systemd-env-778652 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-778652 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-778652 --network force-systemd-env-778652 --ip 192.168.76.2 --volume force-systemd-env-778652:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 09:25:07.065610 2478201 cli_runner.go:164] Run: docker container inspect force-systemd-env-778652 --format={{.State.Running}}
	I1101 09:25:07.086868 2478201 cli_runner.go:164] Run: docker container inspect force-systemd-env-778652 --format={{.State.Status}}
	I1101 09:25:07.109141 2478201 cli_runner.go:164] Run: docker exec force-systemd-env-778652 stat /var/lib/dpkg/alternatives/iptables
	I1101 09:25:07.161970 2478201 oci.go:144] the created container "force-systemd-env-778652" has a running status.
	I1101 09:25:07.161998 2478201 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/force-systemd-env-778652/id_rsa...
	I1101 09:25:08.006371 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/force-systemd-env-778652/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1101 09:25:08.006434 2478201 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/force-systemd-env-778652/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 09:25:08.027751 2478201 cli_runner.go:164] Run: docker container inspect force-systemd-env-778652 --format={{.State.Status}}
	I1101 09:25:08.047931 2478201 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 09:25:08.047958 2478201 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-778652 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 09:25:08.088582 2478201 cli_runner.go:164] Run: docker container inspect force-systemd-env-778652 --format={{.State.Status}}
	I1101 09:25:08.107209 2478201 machine.go:94] provisionDockerMachine start ...
	I1101 09:25:08.107316 2478201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-778652
	I1101 09:25:08.124470 2478201 main.go:143] libmachine: Using SSH client type: native
	I1101 09:25:08.124803 2478201 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36320 <nil> <nil>}
	I1101 09:25:08.124817 2478201 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:25:08.125463 2478201 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 09:25:11.287948 2478201 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-env-778652
	
	I1101 09:25:11.287977 2478201 ubuntu.go:182] provisioning hostname "force-systemd-env-778652"
	I1101 09:25:11.288073 2478201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-778652
	I1101 09:25:11.313386 2478201 main.go:143] libmachine: Using SSH client type: native
	I1101 09:25:11.313682 2478201 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36320 <nil> <nil>}
	I1101 09:25:11.313698 2478201 main.go:143] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-778652 && echo "force-systemd-env-778652" | sudo tee /etc/hostname
	I1101 09:25:11.078800 2478398 cli_runner.go:164] Run: docker network inspect pause-951206 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:25:11.096172 2478398 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 09:25:11.100580 2478398 kubeadm.go:884] updating cluster {Name:pause-951206 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-951206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:25:11.100741 2478398 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:25:11.100808 2478398 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:25:11.142943 2478398 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:25:11.142977 2478398 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:25:11.143048 2478398 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:25:11.177461 2478398 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:25:11.177487 2478398 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:25:11.177496 2478398 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1101 09:25:11.177655 2478398 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-951206 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-951206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:25:11.177787 2478398 ssh_runner.go:195] Run: crio config
	I1101 09:25:11.259109 2478398 cni.go:84] Creating CNI manager for ""
	I1101 09:25:11.259142 2478398 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:25:11.259161 2478398 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:25:11.259202 2478398 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-951206 NodeName:pause-951206 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:25:11.259488 2478398 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-951206"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:25:11.259591 2478398 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:25:11.267301 2478398 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:25:11.267401 2478398 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:25:11.274776 2478398 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1101 09:25:11.288571 2478398 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:25:11.302134 2478398 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1101 09:25:11.326583 2478398 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:25:11.330260 2478398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:25:11.497733 2478398 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:25:11.518479 2478398 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/pause-951206 for IP: 192.168.85.2
	I1101 09:25:11.518501 2478398 certs.go:195] generating shared ca certs ...
	I1101 09:25:11.518517 2478398 certs.go:227] acquiring lock for ca certs: {Name:mk24842b93d4e231663829c7c8677798ff77a3a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:25:11.518669 2478398 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key
	I1101 09:25:11.518723 2478398 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key
	I1101 09:25:11.518731 2478398 certs.go:257] generating profile certs ...
	I1101 09:25:11.518809 2478398 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/pause-951206/client.key
	I1101 09:25:11.518879 2478398 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/pause-951206/apiserver.key.55d03f72
	I1101 09:25:11.518918 2478398 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/pause-951206/proxy-client.key
	I1101 09:25:11.519025 2478398 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem (1338 bytes)
	W1101 09:25:11.519051 2478398 certs.go:480] ignoring /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982_empty.pem, impossibly tiny 0 bytes
	I1101 09:25:11.519058 2478398 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 09:25:11.519087 2478398 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:25:11.519111 2478398 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:25:11.519131 2478398 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem (1675 bytes)
	I1101 09:25:11.519172 2478398 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem (1708 bytes)
	I1101 09:25:11.519748 2478398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:25:11.558043 2478398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 09:25:11.578895 2478398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:25:11.603249 2478398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:25:11.628368 2478398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/pause-951206/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 09:25:11.647093 2478398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/pause-951206/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:25:11.665961 2478398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/pause-951206/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:25:11.682381 2478398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/pause-951206/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:25:11.702756 2478398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem --> /usr/share/ca-certificates/23159822.pem (1708 bytes)
	I1101 09:25:11.723072 2478398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:25:11.740988 2478398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem --> /usr/share/ca-certificates/2315982.pem (1338 bytes)
	I1101 09:25:11.760282 2478398 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:25:11.773510 2478398 ssh_runner.go:195] Run: openssl version
	I1101 09:25:11.780020 2478398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23159822.pem && ln -fs /usr/share/ca-certificates/23159822.pem /etc/ssl/certs/23159822.pem"
	I1101 09:25:11.788191 2478398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23159822.pem
	I1101 09:25:11.792216 2478398 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 08:36 /usr/share/ca-certificates/23159822.pem
	I1101 09:25:11.792276 2478398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23159822.pem
	I1101 09:25:11.834292 2478398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23159822.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:25:11.842125 2478398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:25:11.850176 2478398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:25:11.854670 2478398 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:25:11.854774 2478398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:25:11.897052 2478398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:25:11.905291 2478398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2315982.pem && ln -fs /usr/share/ca-certificates/2315982.pem /etc/ssl/certs/2315982.pem"
	I1101 09:25:11.913755 2478398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2315982.pem
	I1101 09:25:11.918235 2478398 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 08:36 /usr/share/ca-certificates/2315982.pem
	I1101 09:25:11.918312 2478398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2315982.pem
	I1101 09:25:11.961677 2478398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2315982.pem /etc/ssl/certs/51391683.0"
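The three cert blocks above all follow the same pattern: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and expose it in /etc/ssl/certs under the conventional <hash>.0 name. A minimal standalone sketch of that pattern, reusing the minikubeCA path from the log:

	# Register a CA in the OpenSSL cert directory by subject hash (sketch).
	CERT=/usr/share/ca-certificates/minikubeCA.pem   # path as used in the log
	HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # hash-named symlink OpenSSL resolves
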
	I1101 09:25:11.969727 2478398 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:25:11.974292 2478398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:25:12.016776 2478398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:25:12.059319 2478398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:25:12.102433 2478398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:25:12.147326 2478398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:25:12.206188 2478398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
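Each of the -checkend 86400 probes above asks whether a control-plane cert will still be valid 24 hours from now; the exit status, not the output, carries the answer. A small hedged illustration using one of the paths from the log:

	# openssl exits 0 if the cert is still valid 86400 s (24 h) from now, non-zero otherwise.
	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	    echo "cert good for at least another 24h"
	else
	    echo "cert expires within 24h (or is already invalid)"
	fi
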
	I1101 09:25:12.251551 2478398 kubeadm.go:401] StartCluster: {Name:pause-951206 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-951206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:25:12.251682 2478398 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:25:12.251740 2478398 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:25:12.331985 2478398 cri.go:89] found id: "8c588c62b138bc6cc1aaeae9bc15a83731cfe0ee7bdd104f8d28c7b0b80aee31"
	I1101 09:25:12.332009 2478398 cri.go:89] found id: "b504ec0758f08784fd92576699693b07ab12f61917f04b0c3f9548f87aa4e834"
	I1101 09:25:12.332014 2478398 cri.go:89] found id: "ae4144162ff99a96713f4b79715f1b459b8757fe18777f4b87377958ea076cd5"
	I1101 09:25:12.332018 2478398 cri.go:89] found id: "0cdb4df035719000dd91d22036208e0ef5b5c165830c9fe2e474beebd7fa8f3d"
	I1101 09:25:12.332022 2478398 cri.go:89] found id: "c2887326593949338658d54bb176a3f92ca0ce4d7619db8e75d1d9e2fd3c297c"
	I1101 09:25:12.332025 2478398 cri.go:89] found id: "50e4c19be115cbd5a27bca874f1801f8fb8f2ae6a82fb47904af1031ae88e97a"
	I1101 09:25:12.332028 2478398 cri.go:89] found id: "c6f012ce8b285f2c250e9e5f1e148dce648fbf07bd9c33e499baf85c396f37d8"
	I1101 09:25:12.332031 2478398 cri.go:89] found id: ""
	I1101 09:25:12.332077 2478398 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 09:25:12.359046 2478398 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:25:12Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:25:12.359129 2478398 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:25:12.374100 2478398 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 09:25:12.374117 2478398 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 09:25:12.374170 2478398 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 09:25:12.393377 2478398 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:25:12.393915 2478398 kubeconfig.go:125] found "pause-951206" server: "https://192.168.85.2:8443"
	I1101 09:25:12.394473 2478398 kapi.go:59] client config for pause-951206: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/pause-951206/client.crt", KeyFile:"/home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/pause-951206/client.key", CAFile:"/home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:
[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 09:25:12.394957 2478398 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1101 09:25:12.394970 2478398 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1101 09:25:12.394975 2478398 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1101 09:25:12.394986 2478398 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1101 09:25:12.394991 2478398 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1101 09:25:12.395239 2478398 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 09:25:12.418731 2478398 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1101 09:25:12.418762 2478398 kubeadm.go:602] duration metric: took 44.638728ms to restartPrimaryControlPlane
	I1101 09:25:12.418770 2478398 kubeadm.go:403] duration metric: took 167.229312ms to StartCluster
	I1101 09:25:12.418785 2478398 settings.go:142] acquiring lock: {Name:mka73a3765cb6575d4abe38a6ae3325222684786 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:25:12.418846 2478398 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:25:12.419527 2478398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/kubeconfig: {Name:mk53329368b7306829f4e47471838b13e1e36d2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:25:12.419728 2478398 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:25:12.420193 2478398 config.go:182] Loaded profile config "pause-951206": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:25:12.420268 2478398 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:25:12.425180 2478398 out.go:179] * Verifying Kubernetes components...
	I1101 09:25:12.425289 2478398 out.go:179] * Enabled addons: 
	I1101 09:25:11.501917 2478201 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-env-778652
	
	I1101 09:25:11.502040 2478201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-778652
	I1101 09:25:11.528063 2478201 main.go:143] libmachine: Using SSH client type: native
	I1101 09:25:11.528702 2478201 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36320 <nil> <nil>}
	I1101 09:25:11.528728 2478201 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-778652' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-778652/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-778652' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:25:11.687720 2478201 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:25:11.687816 2478201 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-2314135/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-2314135/.minikube}
	I1101 09:25:11.687930 2478201 ubuntu.go:190] setting up certificates
	I1101 09:25:11.687966 2478201 provision.go:84] configureAuth start
	I1101 09:25:11.688041 2478201 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-778652
	I1101 09:25:11.711320 2478201 provision.go:143] copyHostCerts
	I1101 09:25:11.711363 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem
	I1101 09:25:11.711392 2478201 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem, removing ...
	I1101 09:25:11.711399 2478201 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem
	I1101 09:25:11.711469 2478201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem (1082 bytes)
	I1101 09:25:11.711551 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem
	I1101 09:25:11.711568 2478201 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem, removing ...
	I1101 09:25:11.711572 2478201 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem
	I1101 09:25:11.711596 2478201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem (1123 bytes)
	I1101 09:25:11.711644 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem
	I1101 09:25:11.711660 2478201 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem, removing ...
	I1101 09:25:11.711664 2478201 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem
	I1101 09:25:11.711687 2478201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem (1675 bytes)
	I1101 09:25:11.711761 2478201 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-778652 san=[127.0.0.1 192.168.76.2 force-systemd-env-778652 localhost minikube]
	I1101 09:25:12.136569 2478201 provision.go:177] copyRemoteCerts
	I1101 09:25:12.136693 2478201 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:25:12.136760 2478201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-778652
	I1101 09:25:12.157563 2478201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36320 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/force-systemd-env-778652/id_rsa Username:docker}
	I1101 09:25:12.267890 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1101 09:25:12.267960 2478201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:25:12.298413 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1101 09:25:12.298480 2478201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1101 09:25:12.322050 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1101 09:25:12.322118 2478201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 09:25:12.347137 2478201 provision.go:87] duration metric: took 659.134296ms to configureAuth
	I1101 09:25:12.347206 2478201 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:25:12.347427 2478201 config.go:182] Loaded profile config "force-systemd-env-778652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:25:12.347573 2478201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-778652
	I1101 09:25:12.375664 2478201 main.go:143] libmachine: Using SSH client type: native
	I1101 09:25:12.376004 2478201 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36320 <nil> <nil>}
	I1101 09:25:12.376020 2478201 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:25:12.757729 2478201 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:25:12.757753 2478201 machine.go:97] duration metric: took 4.650518581s to provisionDockerMachine
	I1101 09:25:12.757763 2478201 client.go:176] duration metric: took 11.044803182s to LocalClient.Create
	I1101 09:25:12.757775 2478201 start.go:167] duration metric: took 11.044874844s to libmachine.API.Create "force-systemd-env-778652"
	I1101 09:25:12.757828 2478201 start.go:293] postStartSetup for "force-systemd-env-778652" (driver="docker")
	I1101 09:25:12.757838 2478201 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:25:12.757920 2478201 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:25:12.758002 2478201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-778652
	I1101 09:25:12.780702 2478201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36320 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/force-systemd-env-778652/id_rsa Username:docker}
	I1101 09:25:13.007632 2478201 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:25:13.014659 2478201 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:25:13.014691 2478201 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:25:13.014702 2478201 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/addons for local assets ...
	I1101 09:25:13.014758 2478201 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/files for local assets ...
	I1101 09:25:13.014849 2478201 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem -> 23159822.pem in /etc/ssl/certs
	I1101 09:25:13.014861 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem -> /etc/ssl/certs/23159822.pem
	I1101 09:25:13.014966 2478201 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:25:13.030381 2478201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem --> /etc/ssl/certs/23159822.pem (1708 bytes)
	I1101 09:25:13.075284 2478201 start.go:296] duration metric: took 317.440741ms for postStartSetup
	I1101 09:25:13.075669 2478201 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-778652
	I1101 09:25:13.115168 2478201 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/config.json ...
	I1101 09:25:13.115443 2478201 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:25:13.115499 2478201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-778652
	I1101 09:25:13.144637 2478201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36320 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/force-systemd-env-778652/id_rsa Username:docker}
	I1101 09:25:13.261490 2478201 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:25:13.272537 2478201 start.go:128] duration metric: took 11.563329008s to createHost
	I1101 09:25:13.272563 2478201 start.go:83] releasing machines lock for "force-systemd-env-778652", held for 11.563472577s
	I1101 09:25:13.272637 2478201 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-778652
	I1101 09:25:13.299999 2478201 ssh_runner.go:195] Run: cat /version.json
	I1101 09:25:13.300055 2478201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-778652
	I1101 09:25:13.300287 2478201 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:25:13.300346 2478201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-778652
	I1101 09:25:13.334765 2478201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36320 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/force-systemd-env-778652/id_rsa Username:docker}
	I1101 09:25:13.341014 2478201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36320 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/force-systemd-env-778652/id_rsa Username:docker}
	I1101 09:25:13.460062 2478201 ssh_runner.go:195] Run: systemctl --version
	I1101 09:25:13.571125 2478201 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:25:13.660212 2478201 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:25:13.668910 2478201 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:25:13.669012 2478201 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:25:13.721483 2478201 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 09:25:13.721508 2478201 start.go:496] detecting cgroup driver to use...
	I1101 09:25:13.721552 2478201 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1101 09:25:13.721631 2478201 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:25:13.753843 2478201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:25:13.775938 2478201 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:25:13.776033 2478201 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:25:13.806157 2478201 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:25:13.834806 2478201 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:25:14.045967 2478201 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:25:14.248604 2478201 docker.go:234] disabling docker service ...
	I1101 09:25:14.248704 2478201 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:25:14.285013 2478201 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:25:14.304592 2478201 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:25:14.491882 2478201 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:25:14.694798 2478201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:25:14.708835 2478201 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:25:14.725060 2478201 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:25:14.725150 2478201 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:25:14.742328 2478201 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 09:25:14.742419 2478201 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:25:14.754768 2478201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:25:14.777234 2478201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:25:14.794717 2478201 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:25:14.817248 2478201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:25:14.842539 2478201 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:25:14.858318 2478201 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
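The sed/grep pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins pause_image to registry.k8s.io/pause:3.10.1, forces cgroup_manager to "systemd", re-adds conmon_cgroup = "pod", and seeds default_sysctls with net.ipv4.ip_unprivileged_port_start=0. A quick hedged check of the result, using the same path as the log:

	# Show the keys the preceding sed commands should have set in the CRI-O drop-in.
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
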
	I1101 09:25:14.873794 2478201 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:25:14.885804 2478201 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:25:14.897231 2478201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:25:15.098357 2478201 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:25:15.301190 2478201 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:25:15.301299 2478201 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:25:15.308627 2478201 start.go:564] Will wait 60s for crictl version
	I1101 09:25:15.308721 2478201 ssh_runner.go:195] Run: which crictl
	I1101 09:25:15.316225 2478201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:25:15.368105 2478201 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:25:15.368216 2478201 ssh_runner.go:195] Run: crio --version
	I1101 09:25:15.419150 2478201 ssh_runner.go:195] Run: crio --version
	I1101 09:25:15.459634 2478201 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:25:15.462594 2478201 cli_runner.go:164] Run: docker network inspect force-systemd-env-778652 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:25:15.488089 2478201 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 09:25:15.492230 2478201 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:25:15.501791 2478201 kubeadm.go:884] updating cluster {Name:force-systemd-env-778652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-778652 Namespace:default APIServerHAVIP: APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:25:15.501898 2478201 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:25:15.501964 2478201 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:25:15.558755 2478201 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:25:15.558774 2478201 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:25:15.558828 2478201 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:25:15.605557 2478201 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:25:15.605628 2478201 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:25:15.605652 2478201 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 09:25:15.605783 2478201 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-778652 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-778652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:25:15.605897 2478201 ssh_runner.go:195] Run: crio config
	I1101 09:25:15.690986 2478201 cni.go:84] Creating CNI manager for ""
	I1101 09:25:15.691111 2478201 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:25:15.691147 2478201 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:25:15.691196 2478201 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-778652 NodeName:force-systemd-env-778652 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:25:15.691373 2478201 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-778652"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:25:15.691487 2478201 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:25:15.701753 2478201 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:25:15.701884 2478201 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:25:15.711986 2478201 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1101 09:25:15.729234 2478201 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:25:15.747384 2478201 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
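At this point the rendered kubeadm config shown above has been written to /var/tmp/minikube/kubeadm.yaml.new (2220 bytes) alongside the kubelet unit and its drop-in. As a hedged sketch, the same file can be exercised with the bundled kubeadm binary before anything is applied to the node:

	# Dry-run the generated config: prints what init would do, applies nothing to the cluster.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
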
	I1101 09:25:15.763438 2478201 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:25:15.767697 2478201 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:25:15.777212 2478201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:25:15.958087 2478201 ssh_runner.go:195] Run: sudo systemctl start kubelet
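With the unit files in place and kubelet started, the effective service definition (base unit plus the 10-kubeadm.conf drop-in shown earlier, with its --hostname-override and --node-ip flags) can be confirmed on the node; a minimal sketch assuming systemd, as the daemon-reload calls in the log imply:

	# Print kubelet.service together with every drop-in, including 10-kubeadm.conf.
	sudo systemctl cat kubelet
	# Confirm it is running with the expected flags.
	sudo systemctl status kubelet --no-pager | head -n 20
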
	I1101 09:25:16.003118 2478201 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652 for IP: 192.168.76.2
	I1101 09:25:16.003199 2478201 certs.go:195] generating shared ca certs ...
	I1101 09:25:16.003233 2478201 certs.go:227] acquiring lock for ca certs: {Name:mk24842b93d4e231663829c7c8677798ff77a3a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:25:16.003475 2478201 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key
	I1101 09:25:16.003566 2478201 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key
	I1101 09:25:16.003606 2478201 certs.go:257] generating profile certs ...
	I1101 09:25:16.003706 2478201 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/client.key
	I1101 09:25:16.003766 2478201 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/client.crt with IP's: []
	I1101 09:25:12.427944 2478398 addons.go:515] duration metric: took 7.570013ms for enable addons: enabled=[]
	I1101 09:25:12.428095 2478398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:25:12.802287 2478398 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:25:12.820822 2478398 node_ready.go:35] waiting up to 6m0s for node "pause-951206" to be "Ready" ...
	I1101 09:25:16.537475 2478201 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/client.crt ...
	I1101 09:25:16.537559 2478201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/client.crt: {Name:mk3e0d0b4efbcd31e60ac39b65d28557f5cdc618 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:25:16.537763 2478201 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/client.key ...
	I1101 09:25:16.537810 2478201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/client.key: {Name:mka63bbf35663fd50984ee97e36cece72ef22ea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:25:16.537931 2478201 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/apiserver.key.71763820
	I1101 09:25:16.537979 2478201 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/apiserver.crt.71763820 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1101 09:25:16.946351 2478201 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/apiserver.crt.71763820 ...
	I1101 09:25:16.946423 2478201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/apiserver.crt.71763820: {Name:mkcb035b041dd30d5ee448dc9db0a5d2327844bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:25:16.946646 2478201 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/apiserver.key.71763820 ...
	I1101 09:25:16.946684 2478201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/apiserver.key.71763820: {Name:mk84d70dc281bc632f90e66ce20e1c4a47e66211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:25:16.946812 2478201 certs.go:382] copying /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/apiserver.crt.71763820 -> /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/apiserver.crt
	I1101 09:25:16.946924 2478201 certs.go:386] copying /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/apiserver.key.71763820 -> /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/apiserver.key
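The apiserver serving cert generated above is created with the IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.76.2, then copied to its final apiserver.crt/apiserver.key names. A hedged way to confirm those SANs ended up in the certificate, using the profile path from the log:

	# Print the Subject Alternative Name extension of the freshly generated apiserver cert.
	CRT=/home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/apiserver.crt
	openssl x509 -noout -text -in "$CRT" | grep -A1 'Subject Alternative Name'
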
	I1101 09:25:16.947023 2478201 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/proxy-client.key
	I1101 09:25:16.947067 2478201 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/proxy-client.crt with IP's: []
	I1101 09:25:17.607390 2478201 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/proxy-client.crt ...
	I1101 09:25:17.607468 2478201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/proxy-client.crt: {Name:mk94032db39ffa2a9aedeaf857e9bb297469459a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:25:17.607697 2478201 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/proxy-client.key ...
	I1101 09:25:17.607735 2478201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/proxy-client.key: {Name:mke888c55a7e70dad1d23b1660cd5ea205743208 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:25:17.607874 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1101 09:25:17.607920 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1101 09:25:17.607952 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1101 09:25:17.607986 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1101 09:25:17.608024 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1101 09:25:17.608057 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1101 09:25:17.608093 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1101 09:25:17.608129 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1101 09:25:17.608207 2478201 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem (1338 bytes)
	W1101 09:25:17.608267 2478201 certs.go:480] ignoring /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982_empty.pem, impossibly tiny 0 bytes
	I1101 09:25:17.608291 2478201 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 09:25:17.608335 2478201 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:25:17.608388 2478201 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:25:17.608429 2478201 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem (1675 bytes)
	I1101 09:25:17.608507 2478201 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem (1708 bytes)
	I1101 09:25:17.608574 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem -> /usr/share/ca-certificates/23159822.pem
	I1101 09:25:17.608613 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:25:17.608643 2478201 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem -> /usr/share/ca-certificates/2315982.pem
	I1101 09:25:17.609242 2478201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:25:17.656717 2478201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 09:25:17.685370 2478201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:25:17.706194 2478201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:25:17.731455 2478201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1101 09:25:17.757939 2478201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:25:17.790063 2478201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:25:17.813821 2478201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:25:17.847737 2478201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem --> /usr/share/ca-certificates/23159822.pem (1708 bytes)
	I1101 09:25:17.870652 2478201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:25:17.891454 2478201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem --> /usr/share/ca-certificates/2315982.pem (1338 bytes)
	I1101 09:25:17.919104 2478201 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:25:17.938827 2478201 ssh_runner.go:195] Run: openssl version
	I1101 09:25:17.950355 2478201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2315982.pem && ln -fs /usr/share/ca-certificates/2315982.pem /etc/ssl/certs/2315982.pem"
	I1101 09:25:17.959502 2478201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2315982.pem
	I1101 09:25:17.967786 2478201 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 08:36 /usr/share/ca-certificates/2315982.pem
	I1101 09:25:17.968047 2478201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2315982.pem
	I1101 09:25:18.027946 2478201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2315982.pem /etc/ssl/certs/51391683.0"
	I1101 09:25:18.041070 2478201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23159822.pem && ln -fs /usr/share/ca-certificates/23159822.pem /etc/ssl/certs/23159822.pem"
	I1101 09:25:18.062259 2478201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23159822.pem
	I1101 09:25:18.073714 2478201 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 08:36 /usr/share/ca-certificates/23159822.pem
	I1101 09:25:18.073842 2478201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23159822.pem
	I1101 09:25:18.156862 2478201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23159822.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:25:18.169186 2478201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:25:18.188378 2478201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:25:18.192396 2478201 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:25:18.192537 2478201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:25:18.245528 2478201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:25:18.261162 2478201 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:25:18.270293 2478201 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 09:25:18.270393 2478201 kubeadm.go:401] StartCluster: {Name:force-systemd-env-778652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-778652 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:25:18.270492 2478201 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:25:18.270603 2478201 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:25:18.318883 2478201 cri.go:89] found id: ""
	I1101 09:25:18.319024 2478201 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:25:18.333166 2478201 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:25:18.346662 2478201 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 09:25:18.346776 2478201 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:25:18.360007 2478201 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 09:25:18.360074 2478201 kubeadm.go:158] found existing configuration files:
	
	I1101 09:25:18.360154 2478201 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 09:25:18.369840 2478201 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 09:25:18.369901 2478201 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 09:25:18.388463 2478201 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 09:25:18.402242 2478201 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 09:25:18.402303 2478201 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:25:18.412915 2478201 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 09:25:18.424931 2478201 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 09:25:18.424995 2478201 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:25:18.435603 2478201 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 09:25:18.446183 2478201 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 09:25:18.446283 2478201 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 09:25:18.455414 2478201 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 09:25:18.540241 2478201 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 09:25:18.540687 2478201 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 09:25:18.581492 2478201 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 09:25:18.581620 2478201 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 09:25:18.581690 2478201 kubeadm.go:319] OS: Linux
	I1101 09:25:18.581764 2478201 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 09:25:18.581847 2478201 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 09:25:18.581922 2478201 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 09:25:18.582005 2478201 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 09:25:18.582081 2478201 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 09:25:18.582160 2478201 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 09:25:18.582231 2478201 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 09:25:18.582313 2478201 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 09:25:18.582385 2478201 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 09:25:18.700646 2478201 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 09:25:18.700825 2478201 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 09:25:18.700961 2478201 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 09:25:18.712625 2478201 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 09:25:18.717205 2478201 out.go:252]   - Generating certificates and keys ...
	I1101 09:25:18.717378 2478201 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 09:25:18.717483 2478201 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 09:25:19.234668 2478201 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 09:25:19.449985 2478201 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 09:25:19.908487 2478201 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 09:25:20.311042 2478201 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 09:25:21.045298 2478201 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 09:25:21.045587 2478201 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-778652 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 09:25:18.333639 2478398 node_ready.go:49] node "pause-951206" is "Ready"
	I1101 09:25:18.333661 2478398 node_ready.go:38] duration metric: took 5.512804866s for node "pause-951206" to be "Ready" ...
	I1101 09:25:18.333675 2478398 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:25:18.333714 2478398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:25:18.372285 2478398 api_server.go:72] duration metric: took 5.952497015s to wait for apiserver process to appear ...
	I1101 09:25:18.372306 2478398 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:25:18.372324 2478398 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:25:18.445620 2478398 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 09:25:18.445700 2478398 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 09:25:18.872937 2478398 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:25:18.908919 2478398 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:25:18.908950 2478398 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 09:25:19.372389 2478398 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:25:19.444158 2478398 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:25:19.444194 2478398 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 09:25:19.872607 2478398 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:25:19.884767 2478398 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1101 09:25:19.886092 2478398 api_server.go:141] control plane version: v1.34.1
	I1101 09:25:19.886158 2478398 api_server.go:131] duration metric: took 1.51384467s to wait for apiserver health ...
	I1101 09:25:19.886181 2478398 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:25:19.893771 2478398 system_pods.go:59] 7 kube-system pods found
	I1101 09:25:19.893856 2478398 system_pods.go:61] "coredns-66bc5c9577-5vztm" [39f1f2f9-c206-4cc0-a799-cb547db90061] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:25:19.893880 2478398 system_pods.go:61] "etcd-pause-951206" [af1f8a25-028b-424a-a524-af4906d319bc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:25:19.893900 2478398 system_pods.go:61] "kindnet-q9r8f" [8791b18b-4128-40c1-961b-b9eb8bb798e0] Running
	I1101 09:25:19.893936 2478398 system_pods.go:61] "kube-apiserver-pause-951206" [9a509e39-ef64-4407-843e-ad2b7b26a20e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:25:19.893957 2478398 system_pods.go:61] "kube-controller-manager-pause-951206" [533a5767-6f17-486a-b374-bb30467f69f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:25:19.893977 2478398 system_pods.go:61] "kube-proxy-6ttp4" [90e67eaf-ffaf-43f7-bf24-0fa6509c4ed3] Running
	I1101 09:25:19.894008 2478398 system_pods.go:61] "kube-scheduler-pause-951206" [533a1e65-747b-46ca-9b99-a0935428abf0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:25:19.894035 2478398 system_pods.go:74] duration metric: took 7.834893ms to wait for pod list to return data ...
	I1101 09:25:19.894059 2478398 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:25:19.897237 2478398 default_sa.go:45] found service account: "default"
	I1101 09:25:19.897311 2478398 default_sa.go:55] duration metric: took 3.219841ms for default service account to be created ...
	I1101 09:25:19.897336 2478398 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:25:19.901185 2478398 system_pods.go:86] 7 kube-system pods found
	I1101 09:25:19.901266 2478398 system_pods.go:89] "coredns-66bc5c9577-5vztm" [39f1f2f9-c206-4cc0-a799-cb547db90061] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:25:19.901300 2478398 system_pods.go:89] "etcd-pause-951206" [af1f8a25-028b-424a-a524-af4906d319bc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:25:19.901321 2478398 system_pods.go:89] "kindnet-q9r8f" [8791b18b-4128-40c1-961b-b9eb8bb798e0] Running
	I1101 09:25:19.901372 2478398 system_pods.go:89] "kube-apiserver-pause-951206" [9a509e39-ef64-4407-843e-ad2b7b26a20e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:25:19.901399 2478398 system_pods.go:89] "kube-controller-manager-pause-951206" [533a5767-6f17-486a-b374-bb30467f69f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:25:19.901417 2478398 system_pods.go:89] "kube-proxy-6ttp4" [90e67eaf-ffaf-43f7-bf24-0fa6509c4ed3] Running
	I1101 09:25:19.901453 2478398 system_pods.go:89] "kube-scheduler-pause-951206" [533a1e65-747b-46ca-9b99-a0935428abf0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:25:19.901476 2478398 system_pods.go:126] duration metric: took 4.121592ms to wait for k8s-apps to be running ...
	I1101 09:25:19.901497 2478398 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:25:19.901589 2478398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:25:19.947756 2478398 system_svc.go:56] duration metric: took 46.23651ms WaitForService to wait for kubelet
	I1101 09:25:19.947835 2478398 kubeadm.go:587] duration metric: took 7.528082282s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:25:19.947904 2478398 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:25:19.954381 2478398 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 09:25:19.954463 2478398 node_conditions.go:123] node cpu capacity is 2
	I1101 09:25:19.954490 2478398 node_conditions.go:105] duration metric: took 6.56543ms to run NodePressure ...
	I1101 09:25:19.954516 2478398 start.go:242] waiting for startup goroutines ...
	I1101 09:25:19.954547 2478398 start.go:247] waiting for cluster config update ...
	I1101 09:25:19.954569 2478398 start.go:256] writing updated cluster config ...
	I1101 09:25:19.954954 2478398 ssh_runner.go:195] Run: rm -f paused
	I1101 09:25:19.960155 2478398 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:25:19.960827 2478398 kapi.go:59] client config for pause-951206: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/pause-951206/client.crt", KeyFile:"/home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/pause-951206/client.key", CAFile:"/home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 09:25:19.964727 2478398 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5vztm" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 09:25:21.977054 2478398 pod_ready.go:104] pod "coredns-66bc5c9577-5vztm" is not "Ready", error: <nil>
	I1101 09:25:21.461458 2478201 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 09:25:21.461916 2478201 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-778652 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 09:25:22.005636 2478201 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 09:25:22.745383 2478201 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 09:25:23.021622 2478201 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 09:25:23.021941 2478201 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 09:25:23.742848 2478201 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 09:25:24.022443 2478201 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 09:25:25.037147 2478201 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 09:25:25.361003 2478201 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 09:25:26.403235 2478201 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 09:25:26.404039 2478201 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 09:25:26.406613 2478201 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 09:25:26.410165 2478201 out.go:252]   - Booting up control plane ...
	I1101 09:25:26.410287 2478201 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:25:26.410380 2478201 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:25:26.410451 2478201 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	W1101 09:25:24.470985 2478398 pod_ready.go:104] pod "coredns-66bc5c9577-5vztm" is not "Ready", error: <nil>
	I1101 09:25:24.972306 2478398 pod_ready.go:94] pod "coredns-66bc5c9577-5vztm" is "Ready"
	I1101 09:25:24.972328 2478398 pod_ready.go:86] duration metric: took 5.00753326s for pod "coredns-66bc5c9577-5vztm" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:25:24.975547 2478398 pod_ready.go:83] waiting for pod "etcd-pause-951206" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 09:25:26.981914 2478398 pod_ready.go:104] pod "etcd-pause-951206" is not "Ready", error: <nil>
	I1101 09:25:26.431943 2478201 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:25:26.432571 2478201 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 09:25:26.440993 2478201 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 09:25:26.441494 2478201 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:25:26.441754 2478201 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:25:26.567699 2478201 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 09:25:26.567826 2478201 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 09:25:28.072438 2478201 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501758088s
	I1101 09:25:28.073196 2478201 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 09:25:28.073405 2478201 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1101 09:25:28.073507 2478201 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 09:25:28.074015 2478201 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1101 09:25:29.480831 2478398 pod_ready.go:104] pod "etcd-pause-951206" is not "Ready", error: <nil>
	I1101 09:25:30.481001 2478398 pod_ready.go:94] pod "etcd-pause-951206" is "Ready"
	I1101 09:25:30.481072 2478398 pod_ready.go:86] duration metric: took 5.505505127s for pod "etcd-pause-951206" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:25:30.483764 2478398 pod_ready.go:83] waiting for pod "kube-apiserver-pause-951206" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:25:30.488042 2478398 pod_ready.go:94] pod "kube-apiserver-pause-951206" is "Ready"
	I1101 09:25:30.488111 2478398 pod_ready.go:86] duration metric: took 4.290695ms for pod "kube-apiserver-pause-951206" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:25:30.490383 2478398 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-951206" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:25:31.496062 2478398 pod_ready.go:94] pod "kube-controller-manager-pause-951206" is "Ready"
	I1101 09:25:31.496089 2478398 pod_ready.go:86] duration metric: took 1.005641528s for pod "kube-controller-manager-pause-951206" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:25:31.498309 2478398 pod_ready.go:83] waiting for pod "kube-proxy-6ttp4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:25:31.506488 2478398 pod_ready.go:94] pod "kube-proxy-6ttp4" is "Ready"
	I1101 09:25:31.506516 2478398 pod_ready.go:86] duration metric: took 8.182403ms for pod "kube-proxy-6ttp4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:25:31.678426 2478398 pod_ready.go:83] waiting for pod "kube-scheduler-pause-951206" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:25:32.078673 2478398 pod_ready.go:94] pod "kube-scheduler-pause-951206" is "Ready"
	I1101 09:25:32.078698 2478398 pod_ready.go:86] duration metric: took 400.244497ms for pod "kube-scheduler-pause-951206" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:25:32.078710 2478398 pod_ready.go:40] duration metric: took 12.1184744s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:25:32.181714 2478398 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 09:25:32.184832 2478398 out.go:179] * Done! kubectl is now configured to use "pause-951206" cluster and "default" namespace by default
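
	[Editor's note] The healthz exchange above — 403 while the anonymous probe is still forbidden, then 500 while the rbac/bootstrap-roles post-start hook is pending, then 200 — is the pattern the test waits for before declaring the apiserver healthy. Below is a minimal, self-contained Go sketch of such a polling loop; the endpoint, retry interval, timeout, and TLS handling are illustrative assumptions, not minikube's actual api_server.go implementation.

	// healthzwait polls an apiserver /healthz endpoint until it returns 200 OK
	// or the deadline expires. Illustrative sketch only; certificate verification
	// is skipped here, which a real client should not do.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				code := resp.StatusCode
				resp.Body.Close()
				if code == http.StatusOK {
					return nil // healthz returned 200: control plane is up
				}
				// 403 (anonymous forbidden) or 500 (post-start hooks pending): keep waiting
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		// https://192.168.85.2:8443/healthz is the endpoint polled in the log above.
		if err := waitForHealthz("https://192.168.85.2:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
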
	I1101 09:25:34.065663 2478201 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.991521003s
	I1101 09:25:34.095524 2478201 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.021298178s
	I1101 09:25:35.075367 2478201 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.001847503s
	I1101 09:25:35.117105 2478201 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 09:25:35.146630 2478201 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 09:25:35.164811 2478201 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 09:25:35.165037 2478201 kubeadm.go:319] [mark-control-plane] Marking the node force-systemd-env-778652 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 09:25:35.184678 2478201 kubeadm.go:319] [bootstrap-token] Using token: 5txf5d.psrfomc8ixrxkuec
	I1101 09:25:35.187577 2478201 out.go:252]   - Configuring RBAC rules ...
	I1101 09:25:35.187701 2478201 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 09:25:35.214377 2478201 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 09:25:35.231367 2478201 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 09:25:35.237653 2478201 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 09:25:35.245412 2478201 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 09:25:35.255213 2478201 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 09:25:35.483295 2478201 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 09:25:36.033207 2478201 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 09:25:36.486521 2478201 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 09:25:36.488988 2478201 kubeadm.go:319] 
	I1101 09:25:36.489065 2478201 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 09:25:36.489072 2478201 kubeadm.go:319] 
	I1101 09:25:36.489154 2478201 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 09:25:36.489158 2478201 kubeadm.go:319] 
	I1101 09:25:36.489185 2478201 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 09:25:36.489247 2478201 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 09:25:36.489300 2478201 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 09:25:36.489305 2478201 kubeadm.go:319] 
	I1101 09:25:36.489362 2478201 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 09:25:36.489367 2478201 kubeadm.go:319] 
	I1101 09:25:36.489423 2478201 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 09:25:36.489428 2478201 kubeadm.go:319] 
	I1101 09:25:36.489495 2478201 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 09:25:36.489575 2478201 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 09:25:36.489647 2478201 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 09:25:36.489651 2478201 kubeadm.go:319] 
	I1101 09:25:36.489745 2478201 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 09:25:36.489826 2478201 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 09:25:36.489831 2478201 kubeadm.go:319] 
	I1101 09:25:36.489920 2478201 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 5txf5d.psrfomc8ixrxkuec \
	I1101 09:25:36.490029 2478201 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4543f3590cccb8495171c728a2631a18a238961aafa5b09f43cdaf25ae01fa5d \
	I1101 09:25:36.490061 2478201 kubeadm.go:319] 	--control-plane 
	I1101 09:25:36.490068 2478201 kubeadm.go:319] 
	I1101 09:25:36.490157 2478201 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 09:25:36.490162 2478201 kubeadm.go:319] 
	I1101 09:25:36.490248 2478201 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 5txf5d.psrfomc8ixrxkuec \
	I1101 09:25:36.490356 2478201 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4543f3590cccb8495171c728a2631a18a238961aafa5b09f43cdaf25ae01fa5d 
	I1101 09:25:36.496581 2478201 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 09:25:36.496823 2478201 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 09:25:36.496947 2478201 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 09:25:36.496965 2478201 cni.go:84] Creating CNI manager for ""
	I1101 09:25:36.496972 2478201 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:25:36.500144 2478201 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 09:25:36.502944 2478201 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 09:25:36.507957 2478201 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 09:25:36.507978 2478201 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 09:25:36.527494 2478201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 09:25:36.924660 2478201 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:25:36.924715 2478201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:25:36.924813 2478201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes force-systemd-env-778652 minikube.k8s.io/updated_at=2025_11_01T09_25_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192 minikube.k8s.io/name=force-systemd-env-778652 minikube.k8s.io/primary=true
	I1101 09:25:37.281484 2478201 ops.go:34] apiserver oom_adj: -16
	I1101 09:25:37.281514 2478201 kubeadm.go:1114] duration metric: took 356.86335ms to wait for elevateKubeSystemPrivileges
	I1101 09:25:37.281614 2478201 kubeadm.go:403] duration metric: took 19.011223935s to StartCluster
	I1101 09:25:37.281636 2478201 settings.go:142] acquiring lock: {Name:mka73a3765cb6575d4abe38a6ae3325222684786 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:25:37.281700 2478201 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:25:37.282663 2478201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/kubeconfig: {Name:mk53329368b7306829f4e47471838b13e1e36d2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:25:37.282869 2478201 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:25:37.282969 2478201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 09:25:37.283318 2478201 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:25:37.283414 2478201 addons.go:70] Setting storage-provisioner=true in profile "force-systemd-env-778652"
	I1101 09:25:37.283435 2478201 addons.go:239] Setting addon storage-provisioner=true in "force-systemd-env-778652"
	I1101 09:25:37.283442 2478201 config.go:182] Loaded profile config "force-systemd-env-778652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:25:37.283458 2478201 host.go:66] Checking if "force-systemd-env-778652" exists ...
	I1101 09:25:37.283602 2478201 addons.go:70] Setting default-storageclass=true in profile "force-systemd-env-778652"
	I1101 09:25:37.283619 2478201 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "force-systemd-env-778652"
	I1101 09:25:37.284040 2478201 cli_runner.go:164] Run: docker container inspect force-systemd-env-778652 --format={{.State.Status}}
	I1101 09:25:37.288619 2478201 cli_runner.go:164] Run: docker container inspect force-systemd-env-778652 --format={{.State.Status}}
	I1101 09:25:37.288810 2478201 out.go:179] * Verifying Kubernetes components...
	I1101 09:25:37.298749 2478201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:25:37.341584 2478201 kapi.go:59] client config for force-systemd-env-778652: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/client.crt", KeyFile:"/home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/force-systemd-env-778652/client.key", CAFile:"/home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 09:25:37.342107 2478201 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1101 09:25:37.342118 2478201 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1101 09:25:37.342124 2478201 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1101 09:25:37.342128 2478201 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1101 09:25:37.342132 2478201 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1101 09:25:37.343885 2478201 addons.go:239] Setting addon default-storageclass=true in "force-systemd-env-778652"
	I1101 09:25:37.343923 2478201 host.go:66] Checking if "force-systemd-env-778652" exists ...
	I1101 09:25:37.344008 2478201 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1101 09:25:37.344399 2478201 cli_runner.go:164] Run: docker container inspect force-systemd-env-778652 --format={{.State.Status}}
	I1101 09:25:37.357587 2478201 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	
	==> CRI-O <==
	Nov 01 09:25:12 pause-951206 crio[2059]: time="2025-11-01T09:25:12.636094223Z" level=info msg="Created container 15815aa5b1fd6da5af7f27e0905719d302ad2a07087e13caf2ccc3b4c889d5fd: kube-system/coredns-66bc5c9577-5vztm/coredns" id=709372da-2d9b-432f-945e-ff42d9440f49 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:25:12 pause-951206 crio[2059]: time="2025-11-01T09:25:12.637790348Z" level=info msg="Starting container: 15815aa5b1fd6da5af7f27e0905719d302ad2a07087e13caf2ccc3b4c889d5fd" id=eb64c016-c59b-4616-b972-bc94071833c1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:25:12 pause-951206 crio[2059]: time="2025-11-01T09:25:12.640266128Z" level=info msg="Started container" PID=2324 containerID=e0e2262ea0f4e166f11273995407222648770de6a2fb43aaafc290e160ee6f6d description=kube-system/kube-controller-manager-pause-951206/kube-controller-manager id=c9204883-9398-43a0-9e55-1c3ca64fd0a4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0ea6f902d3547475bdf99d567eb4f46e408cb0619bdd0817724bc295a3f9290d
	Nov 01 09:25:12 pause-951206 crio[2059]: time="2025-11-01T09:25:12.653207824Z" level=info msg="Started container" PID=2359 containerID=15815aa5b1fd6da5af7f27e0905719d302ad2a07087e13caf2ccc3b4c889d5fd description=kube-system/coredns-66bc5c9577-5vztm/coredns id=eb64c016-c59b-4616-b972-bc94071833c1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ae16d8e0af76ccb45d54cbd891aed89ebe76a7a7c923c53e3516939805ed1fe0
	Nov 01 09:25:12 pause-951206 crio[2059]: time="2025-11-01T09:25:12.72600847Z" level=info msg="Created container ea40b56f7de32b3254ff839c8bd72fe33fe815b4ec8f7ab3ba120646dffc2676: kube-system/kindnet-q9r8f/kindnet-cni" id=0ff178fe-0a72-41cd-9195-7d1e28ed5ca6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:25:12 pause-951206 crio[2059]: time="2025-11-01T09:25:12.728059113Z" level=info msg="Starting container: ea40b56f7de32b3254ff839c8bd72fe33fe815b4ec8f7ab3ba120646dffc2676" id=549b1249-36b8-40b9-8635-331576faa61e name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:25:12 pause-951206 crio[2059]: time="2025-11-01T09:25:12.729791561Z" level=info msg="Started container" PID=2370 containerID=ea40b56f7de32b3254ff839c8bd72fe33fe815b4ec8f7ab3ba120646dffc2676 description=kube-system/kindnet-q9r8f/kindnet-cni id=549b1249-36b8-40b9-8635-331576faa61e name=/runtime.v1.RuntimeService/StartContainer sandboxID=9f54cb18308f125fb7d1e079dda1b48e046811714b2b4fcf77a8f5d7c3586bc5
	Nov 01 09:25:13 pause-951206 crio[2059]: time="2025-11-01T09:25:13.583003884Z" level=info msg="Created container 72980c4c54ed2cf65c1907185972beba807212bf90a9e38c9bef2ba5a40a4f27: kube-system/kube-proxy-6ttp4/kube-proxy" id=4af0f039-6c33-4fec-aaf6-5f3d6e316d5c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:25:13 pause-951206 crio[2059]: time="2025-11-01T09:25:13.588581674Z" level=info msg="Starting container: 72980c4c54ed2cf65c1907185972beba807212bf90a9e38c9bef2ba5a40a4f27" id=1613ae5c-2ab4-4c83-8bae-8e93ceb4b829 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:25:13 pause-951206 crio[2059]: time="2025-11-01T09:25:13.591293886Z" level=info msg="Started container" PID=2358 containerID=72980c4c54ed2cf65c1907185972beba807212bf90a9e38c9bef2ba5a40a4f27 description=kube-system/kube-proxy-6ttp4/kube-proxy id=1613ae5c-2ab4-4c83-8bae-8e93ceb4b829 name=/runtime.v1.RuntimeService/StartContainer sandboxID=49fc64fa1c2272a5b1191914ddb190d5f6bf9005f62e075b7faa9a43183a9d13
	Nov 01 09:25:23 pause-951206 crio[2059]: time="2025-11-01T09:25:23.055786109Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:25:23 pause-951206 crio[2059]: time="2025-11-01T09:25:23.06076737Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:25:23 pause-951206 crio[2059]: time="2025-11-01T09:25:23.060987015Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:25:23 pause-951206 crio[2059]: time="2025-11-01T09:25:23.061061598Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:25:23 pause-951206 crio[2059]: time="2025-11-01T09:25:23.068185535Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:25:23 pause-951206 crio[2059]: time="2025-11-01T09:25:23.068345424Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:25:23 pause-951206 crio[2059]: time="2025-11-01T09:25:23.0684178Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:25:23 pause-951206 crio[2059]: time="2025-11-01T09:25:23.084174351Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:25:23 pause-951206 crio[2059]: time="2025-11-01T09:25:23.08440065Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:25:23 pause-951206 crio[2059]: time="2025-11-01T09:25:23.08449897Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:25:23 pause-951206 crio[2059]: time="2025-11-01T09:25:23.092157834Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:25:23 pause-951206 crio[2059]: time="2025-11-01T09:25:23.092314671Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:25:23 pause-951206 crio[2059]: time="2025-11-01T09:25:23.092388096Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:25:23 pause-951206 crio[2059]: time="2025-11-01T09:25:23.105405228Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:25:23 pause-951206 crio[2059]: time="2025-11-01T09:25:23.105564961Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	ea40b56f7de32       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   26 seconds ago       Running             kindnet-cni               1                   9f54cb18308f1       kindnet-q9r8f                          kube-system
	15815aa5b1fd6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   26 seconds ago       Running             coredns                   1                   ae16d8e0af76c       coredns-66bc5c9577-5vztm               kube-system
	72980c4c54ed2       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   26 seconds ago       Running             kube-proxy                1                   49fc64fa1c227       kube-proxy-6ttp4                       kube-system
	488f329d3b067       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   26 seconds ago       Running             kube-scheduler            1                   c0b1e2bdfa664       kube-scheduler-pause-951206            kube-system
	e0e2262ea0f4e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   26 seconds ago       Running             kube-controller-manager   1                   0ea6f902d3547       kube-controller-manager-pause-951206   kube-system
	1e267b46f1d9e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   26 seconds ago       Running             kube-apiserver            1                   fa74cf5479df0       kube-apiserver-pause-951206            kube-system
	151ac9fa21126       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   26 seconds ago       Running             etcd                      1                   4f21e872f3d55       etcd-pause-951206                      kube-system
	8c588c62b138b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   39 seconds ago       Exited              coredns                   0                   ae16d8e0af76c       coredns-66bc5c9577-5vztm               kube-system
	b504ec0758f08       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   9f54cb18308f1       kindnet-q9r8f                          kube-system
	ae4144162ff99       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   49fc64fa1c227       kube-proxy-6ttp4                       kube-system
	0cdb4df035719       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   fa74cf5479df0       kube-apiserver-pause-951206            kube-system
	c288732659394       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   4f21e872f3d55       etcd-pause-951206                      kube-system
	50e4c19be115c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   0ea6f902d3547       kube-controller-manager-pause-951206   kube-system
	c6f012ce8b285       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   c0b1e2bdfa664       kube-scheduler-pause-951206            kube-system
	
	
	==> coredns [15815aa5b1fd6da5af7f27e0905719d302ad2a07087e13caf2ccc3b4c889d5fd] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52224 - 56883 "HINFO IN 2270440970023884154.8025078414390256880. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.0564672s
	
	
	==> coredns [8c588c62b138bc6cc1aaeae9bc15a83731cfe0ee7bdd104f8d28c7b0b80aee31] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36078 - 24232 "HINFO IN 7947653426953632850.3437096107154526178. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012576989s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-951206
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-951206
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=pause-951206
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_24_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:24:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-951206
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:25:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:24:59 +0000   Sat, 01 Nov 2025 09:24:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:24:59 +0000   Sat, 01 Nov 2025 09:24:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:24:59 +0000   Sat, 01 Nov 2025 09:24:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:24:59 +0000   Sat, 01 Nov 2025 09:24:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-951206
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                1b53ea08-0f20-424e-bf47-e4d9e80e497e
	  Boot ID:                    eebecd53-57fd-46e5-aa39-103fca906436
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-5vztm                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     81s
	  kube-system                 etcd-pause-951206                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         86s
	  kube-system                 kindnet-q9r8f                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      81s
	  kube-system                 kube-apiserver-pause-951206             250m (12%)    0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-controller-manager-pause-951206    200m (10%)    0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-proxy-6ttp4                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-scheduler-pause-951206             100m (5%)     0 (0%)      0 (0%)           0 (0%)         86s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 80s                kube-proxy       
	  Normal   Starting                 18s                kube-proxy       
	  Normal   NodeHasSufficientMemory  99s (x8 over 99s)  kubelet          Node pause-951206 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    99s (x8 over 99s)  kubelet          Node pause-951206 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     99s (x8 over 99s)  kubelet          Node pause-951206 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 87s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 87s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  86s                kubelet          Node pause-951206 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    86s                kubelet          Node pause-951206 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     86s                kubelet          Node pause-951206 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           82s                node-controller  Node pause-951206 event: Registered Node pause-951206 in Controller
	  Normal   NodeReady                40s                kubelet          Node pause-951206 status is now: NodeReady
	  Normal   RegisteredNode           18s                node-controller  Node pause-951206 event: Registered Node pause-951206 in Controller
	
	
	==> dmesg <==
	[Nov 1 09:00] overlayfs: idmapped layers are currently not supported
	[  +4.169917] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:01] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:02] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:03] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:08] overlayfs: idmapped layers are currently not supported
	[ +35.036001] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:10] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:11] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:12] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:13] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:14] overlayfs: idmapped layers are currently not supported
	[  +7.992192] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:15] overlayfs: idmapped layers are currently not supported
	[ +24.457663] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:16] overlayfs: idmapped layers are currently not supported
	[ +26.408819] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:18] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:22] overlayfs: idmapped layers are currently not supported
	[ +31.970573] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:24] overlayfs: idmapped layers are currently not supported
	[ +34.721891] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:25] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [151ac9fa211263a6da9c02d44bf9df8af1a169e8ad976bb46ffd74c1cc8a3b89] <==
	{"level":"warn","ts":"2025-11-01T09:25:15.957027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:15.986027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.048184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.082607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.100127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.116808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.159712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.198231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.211533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.234290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.304753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.342040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.358224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.388784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.403497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.420315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.452160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.475990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.495781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.539297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.570624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.594718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.628443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.656230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:25:16.793764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33494","server-name":"","error":"EOF"}
	
	
	==> etcd [c2887326593949338658d54bb176a3f92ca0ce4d7619db8e75d1d9e2fd3c297c] <==
	{"level":"warn","ts":"2025-11-01T09:24:07.633816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:24:07.647932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:24:07.682986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:24:07.794837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:24:07.808486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:24:07.818022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:24:07.933147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54704","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T09:25:04.203165Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-01T09:25:04.203232Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-951206","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-11-01T09:25:04.203326Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T09:25:04.203369Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-11-01T09:25:06.878695Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T09:25:06.878758Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T09:25:06.878749Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-11-01T09:25:06.878768Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-01T09:25:06.878695Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T09:25:06.878784Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-11-01T09:25:06.878782Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"error","ts":"2025-11-01T09:25:06.878791Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T09:25:06.878804Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-01T09:25:06.878851Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-01T09:25:06.884592Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-11-01T09:25:06.884729Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T09:25:06.884791Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-01T09:25:06.884840Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-951206","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 09:25:39 up 18:08,  0 user,  load average: 4.40, 3.78, 2.78
	Linux pause-951206 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b504ec0758f08784fd92576699693b07ab12f61917f04b0c3f9548f87aa4e834] <==
	I1101 09:24:18.755471       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:24:18.755808       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 09:24:18.756017       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:24:18.756062       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:24:18.756099       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:24:18Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:24:18.965840       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:24:18.965935       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:24:18.965968       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:24:18.966954       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 09:24:48.966043       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 09:24:48.967064       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 09:24:48.967236       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 09:24:48.967317       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1101 09:24:50.466936       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:24:50.467043       1 metrics.go:72] Registering metrics
	I1101 09:24:50.467144       1 controller.go:711] "Syncing nftables rules"
	I1101 09:24:58.971940       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 09:24:58.971979       1 main.go:301] handling current node
	
	
	==> kindnet [ea40b56f7de32b3254ff839c8bd72fe33fe815b4ec8f7ab3ba120646dffc2676] <==
	I1101 09:25:12.871129       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:25:12.884161       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 09:25:12.884408       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:25:12.884459       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:25:12.884498       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:25:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:25:13.055473       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:25:13.055500       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:25:13.055512       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:25:13.056680       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:25:18.561269       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:25:18.561384       1 metrics.go:72] Registering metrics
	I1101 09:25:18.561498       1 controller.go:711] "Syncing nftables rules"
	I1101 09:25:23.055260       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 09:25:23.055437       1 main.go:301] handling current node
	I1101 09:25:33.055557       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 09:25:33.055602       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0cdb4df035719000dd91d22036208e0ef5b5c165830c9fe2e474beebd7fa8f3d] <==
	W1101 09:25:05.265100       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:05.265136       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:05.265175       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:05.265211       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:05.265257       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:05.265273       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:05.265298       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:05.265319       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:05.265338       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:05.265362       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:05.265383       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:06.494465       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:06.529372       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:06.573193       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:06.573193       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:06.579652       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:06.580887       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:06.585254       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:06.599951       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:06.602295       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:06.635264       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:06.645807       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:06.648264       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:06.670190       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:25:06.674792       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [1e267b46f1d9e93b1931162cf060aaf69731ece8d757ffde3a2582cfd7651ffb] <==
	I1101 09:25:18.466266       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:25:18.508206       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:25:18.520733       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 09:25:18.534145       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 09:25:18.534266       1 policy_source.go:240] refreshing policies
	I1101 09:25:18.537182       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 09:25:18.543662       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:25:18.550501       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 09:25:18.559591       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 09:25:18.559633       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 09:25:18.577239       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 09:25:18.602926       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 09:25:18.603234       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 09:25:18.632412       1 aggregator.go:171] initial CRD sync complete...
	I1101 09:25:18.632509       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 09:25:18.632554       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:25:18.632599       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:25:18.634246       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1101 09:25:18.668815       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 09:25:18.862955       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:25:20.382319       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:25:21.832643       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:25:21.869137       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:25:22.015659       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:25:22.066356       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [50e4c19be115cbd5a27bca874f1801f8fb8f2ae6a82fb47904af1031ae88e97a] <==
	I1101 09:24:17.300547       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 09:24:17.301756       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 09:24:17.301866       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-951206"
	I1101 09:24:17.301937       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 09:24:17.302375       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 09:24:17.313036       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-951206" podCIDRs=["10.244.0.0/24"]
	I1101 09:24:17.313663       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 09:24:17.318790       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:24:17.325035       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 09:24:17.329594       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 09:24:17.335622       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 09:24:17.335720       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:24:17.335728       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:24:17.335747       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:24:17.337416       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 09:24:17.337481       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:24:17.343720       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 09:24:17.343776       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 09:24:17.347891       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:24:17.347954       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 09:24:17.347971       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 09:24:17.348346       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:24:17.366334       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:24:17.380046       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:25:02.309239       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [e0e2262ea0f4e166f11273995407222648770de6a2fb43aaafc290e160ee6f6d] <==
	I1101 09:25:21.751322       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 09:25:21.751759       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 09:25:21.753635       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 09:25:21.753733       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:25:21.753745       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 09:25:21.753762       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 09:25:21.757232       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 09:25:21.758871       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 09:25:21.758954       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 09:25:21.759843       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 09:25:21.760083       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 09:25:21.760338       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 09:25:21.760419       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:25:21.763611       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 09:25:21.763730       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 09:25:21.772562       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 09:25:21.772929       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 09:25:21.773115       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 09:25:21.773299       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-951206"
	I1101 09:25:21.773386       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 09:25:21.801361       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:25:21.805558       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:25:21.805649       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:25:21.805684       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:25:21.802387       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	
	
	==> kube-proxy [72980c4c54ed2cf65c1907185972beba807212bf90a9e38c9bef2ba5a40a4f27] <==
	I1101 09:25:14.160083       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:25:15.962005       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:25:18.696467       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:25:18.711888       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 09:25:18.736719       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:25:20.737407       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:25:20.737467       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:25:21.011930       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:25:21.021965       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:25:21.079933       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:25:21.081343       1 config.go:200] "Starting service config controller"
	I1101 09:25:21.139917       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:25:21.140004       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:25:21.140016       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:25:21.140031       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:25:21.140041       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:25:21.140782       1 config.go:309] "Starting node config controller"
	I1101 09:25:21.140801       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:25:21.140808       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:25:21.270087       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:25:21.275914       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:25:21.348160       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [ae4144162ff99a96713f4b79715f1b459b8757fe18777f4b87377958ea076cd5] <==
	I1101 09:24:18.772635       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:24:18.851524       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:24:18.953838       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:24:18.953875       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 09:24:18.953968       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:24:19.013267       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:24:19.013325       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:24:19.026266       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:24:19.026603       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:24:19.026620       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:24:19.027655       1 config.go:200] "Starting service config controller"
	I1101 09:24:19.027665       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:24:19.030209       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:24:19.030237       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:24:19.030280       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:24:19.030285       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:24:19.030922       1 config.go:309] "Starting node config controller"
	I1101 09:24:19.030930       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:24:19.030935       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:24:19.128173       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:24:19.130767       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:24:19.130802       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [488f329d3b067e2efa5f2037a5cad268b4ba316f7b54011aaa9c30ec6aee51cc] <==
	I1101 09:25:18.377839       1 serving.go:386] Generated self-signed cert in-memory
	I1101 09:25:22.598534       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:25:22.598567       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:25:22.604211       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:25:22.604316       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 09:25:22.604332       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 09:25:22.604358       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:25:22.620289       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:25:22.620315       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:25:22.620338       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:25:22.620351       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:25:22.704428       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 09:25:22.721384       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:25:22.721463       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [c6f012ce8b285f2c250e9e5f1e148dce648fbf07bd9c33e499baf85c396f37d8] <==
	I1101 09:24:08.947964       1 serving.go:386] Generated self-signed cert in-memory
	I1101 09:24:11.708728       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:24:11.708763       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:24:11.713924       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:24:11.714001       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 09:24:11.714027       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 09:24:11.714056       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:24:11.741274       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:24:11.741308       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:24:11.741520       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:24:11.741534       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:24:11.814684       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 09:24:11.842063       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:24:11.842135       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:25:04.222808       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1101 09:25:04.222894       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1101 09:25:04.225481       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1101 09:25:04.227644       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:25:04.227729       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:25:04.227777       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1101 09:25:04.232920       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1101 09:25:04.233001       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 01 09:25:12 pause-951206 kubelet[1296]: I1101 09:25:12.442139    1296 scope.go:117] "RemoveContainer" containerID="8c588c62b138bc6cc1aaeae9bc15a83731cfe0ee7bdd104f8d28c7b0b80aee31"
	Nov 01 09:25:12 pause-951206 kubelet[1296]: E1101 09:25:12.442565    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-5vztm\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="39f1f2f9-c206-4cc0-a799-cb547db90061" pod="kube-system/coredns-66bc5c9577-5vztm"
	Nov 01 09:25:12 pause-951206 kubelet[1296]: E1101 09:25:12.442719    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-951206\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="f4a72daa8226daa8a8c8bca246dc7993" pod="kube-system/kube-controller-manager-pause-951206"
	Nov 01 09:25:12 pause-951206 kubelet[1296]: E1101 09:25:12.442861    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-951206\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="22850738db4b57fe25790f0b6f614526" pod="kube-system/etcd-pause-951206"
	Nov 01 09:25:12 pause-951206 kubelet[1296]: E1101 09:25:12.442997    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-951206\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6977649d9e2ac633ed5a82c18c6cd213" pod="kube-system/kube-apiserver-pause-951206"
	Nov 01 09:25:12 pause-951206 kubelet[1296]: E1101 09:25:12.443128    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-951206\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="0b4a3cb8eda2a2cbd443a0370aa2b9cd" pod="kube-system/kube-scheduler-pause-951206"
	Nov 01 09:25:12 pause-951206 kubelet[1296]: E1101 09:25:12.443262    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6ttp4\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="90e67eaf-ffaf-43f7-bf24-0fa6509c4ed3" pod="kube-system/kube-proxy-6ttp4"
	Nov 01 09:25:12 pause-951206 kubelet[1296]: E1101 09:25:12.443391    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-q9r8f\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="8791b18b-4128-40c1-961b-b9eb8bb798e0" pod="kube-system/kindnet-q9r8f"
	Nov 01 09:25:18 pause-951206 kubelet[1296]: E1101 09:25:18.444736    1296 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-951206\" is forbidden: User \"system:node:pause-951206\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-951206' and this object" podUID="f4a72daa8226daa8a8c8bca246dc7993" pod="kube-system/kube-controller-manager-pause-951206"
	Nov 01 09:25:18 pause-951206 kubelet[1296]: E1101 09:25:18.448281    1296 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-951206\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-951206' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 01 09:25:18 pause-951206 kubelet[1296]: E1101 09:25:18.448354    1296 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-951206\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-951206' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Nov 01 09:25:18 pause-951206 kubelet[1296]: E1101 09:25:18.448370    1296 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-951206\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-951206' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 01 09:25:18 pause-951206 kubelet[1296]: E1101 09:25:18.465932    1296 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-951206\" is forbidden: User \"system:node:pause-951206\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-951206' and this object" podUID="22850738db4b57fe25790f0b6f614526" pod="kube-system/etcd-pause-951206"
	Nov 01 09:25:18 pause-951206 kubelet[1296]: E1101 09:25:18.494142    1296 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-951206\" is forbidden: User \"system:node:pause-951206\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-951206' and this object" podUID="6977649d9e2ac633ed5a82c18c6cd213" pod="kube-system/kube-apiserver-pause-951206"
	Nov 01 09:25:18 pause-951206 kubelet[1296]: E1101 09:25:18.496256    1296 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-951206\" is forbidden: User \"system:node:pause-951206\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-951206' and this object" podUID="0b4a3cb8eda2a2cbd443a0370aa2b9cd" pod="kube-system/kube-scheduler-pause-951206"
	Nov 01 09:25:18 pause-951206 kubelet[1296]: E1101 09:25:18.504960    1296 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-6ttp4\" is forbidden: User \"system:node:pause-951206\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-951206' and this object" podUID="90e67eaf-ffaf-43f7-bf24-0fa6509c4ed3" pod="kube-system/kube-proxy-6ttp4"
	Nov 01 09:25:18 pause-951206 kubelet[1296]: E1101 09:25:18.512624    1296 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-q9r8f\" is forbidden: User \"system:node:pause-951206\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-951206' and this object" podUID="8791b18b-4128-40c1-961b-b9eb8bb798e0" pod="kube-system/kindnet-q9r8f"
	Nov 01 09:25:18 pause-951206 kubelet[1296]: E1101 09:25:18.521044    1296 status_manager.go:1018] "Failed to get status for pod" err=<
	Nov 01 09:25:18 pause-951206 kubelet[1296]:         pods "coredns-66bc5c9577-5vztm" is forbidden: User "system:node:pause-951206" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-951206' and this object
	Nov 01 09:25:18 pause-951206 kubelet[1296]:         RBAC: [role.rbac.authorization.k8s.io "kubeadm:kubelet-config" not found, role.rbac.authorization.k8s.io "kubeadm:nodes-kubeadm-config" not found]
	Nov 01 09:25:18 pause-951206 kubelet[1296]:  > podUID="39f1f2f9-c206-4cc0-a799-cb547db90061" pod="kube-system/coredns-66bc5c9577-5vztm"
	Nov 01 09:25:23 pause-951206 kubelet[1296]: W1101 09:25:23.284925    1296 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 01 09:25:32 pause-951206 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 09:25:32 pause-951206 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 09:25:32 pause-951206 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-951206 -n pause-951206
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-951206 -n pause-951206: exit status 2 (364.659876ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-951206 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
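A possible manual follow-up, not part of the recorded run (profile name and binary path taken from the log above): confirm whether any CRI containers were actually paused, since the post-mortem status above still reports the API server as Running after the pause attempt.

	# hypothetical check, not executed by the test
	minikube -p pause-951206 ssh -- "sudo crictl ps --state Running -q | wc -l"
	out/minikube-linux-arm64 status -p pause-951206 --format '{{.APIServer}}'
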
--- FAIL: TestPause/serial/Pause (8.45s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-068218 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-068218 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (284.557976ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:27:39Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-068218 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
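The enable fails in minikube's paused-state check: "sudo runc list -f json" on the node exits with "open /run/runc: no such file or directory". A hedged way to reproduce that check by hand (profile name taken from the command above; the directory may simply not exist yet under crio's runc root at that point):

	# hypothetical manual reproduction of the failing "check paused" step
	minikube -p old-k8s-version-068218 ssh -- "ls -ld /run/runc; sudo runc list -f json"
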
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-068218 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-068218 describe deploy/metrics-server -n kube-system: exit status 1 (83.209924ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-068218 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
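Since the metrics-server deployment was never created, the image assertion has nothing to inspect. Assuming the deployment existed, one way to print the configured image (a sketch, not part of the recorded run) would be:

	# hypothetical check: print the image set on the metrics-server deployment
	kubectl --context old-k8s-version-068218 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
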
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-068218
helpers_test.go:243: (dbg) docker inspect old-k8s-version-068218:

-- stdout --
	[
	    {
	        "Id": "e88ec4f29f189ceff4fe4bdf474ad9f9e0ae1e6116ca92110016a09e33532bf4",
	        "Created": "2025-11-01T09:26:34.668923657Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2488369,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:26:34.731601433Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/e88ec4f29f189ceff4fe4bdf474ad9f9e0ae1e6116ca92110016a09e33532bf4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e88ec4f29f189ceff4fe4bdf474ad9f9e0ae1e6116ca92110016a09e33532bf4/hostname",
	        "HostsPath": "/var/lib/docker/containers/e88ec4f29f189ceff4fe4bdf474ad9f9e0ae1e6116ca92110016a09e33532bf4/hosts",
	        "LogPath": "/var/lib/docker/containers/e88ec4f29f189ceff4fe4bdf474ad9f9e0ae1e6116ca92110016a09e33532bf4/e88ec4f29f189ceff4fe4bdf474ad9f9e0ae1e6116ca92110016a09e33532bf4-json.log",
	        "Name": "/old-k8s-version-068218",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-068218:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-068218",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e88ec4f29f189ceff4fe4bdf474ad9f9e0ae1e6116ca92110016a09e33532bf4",
	                "LowerDir": "/var/lib/docker/overlay2/488e76226c62f13342a618b323cabf4fd578df8c302831cd955bc8b2c518c74e-init/diff:/var/lib/docker/overlay2/e248e2c4c8c52e2b41c7098e27a1e6d3433c7b0d01c47093073da500268c4b77/diff",
	                "MergedDir": "/var/lib/docker/overlay2/488e76226c62f13342a618b323cabf4fd578df8c302831cd955bc8b2c518c74e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/488e76226c62f13342a618b323cabf4fd578df8c302831cd955bc8b2c518c74e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/488e76226c62f13342a618b323cabf4fd578df8c302831cd955bc8b2c518c74e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-068218",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-068218/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-068218",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-068218",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-068218",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5ef045b743bfd2cb8b6d3b5e0f30fac981232d6194d1fd1a474f7b02f0a9c21d",
	            "SandboxKey": "/var/run/docker/netns/5ef045b743bf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36335"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36336"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36339"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36337"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36338"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-068218": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0a:a1:26:41:66:86",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e195285262e64d3b782d7abf538ceec14d34fc8c1e31d12d18b21428d3b9ea34",
	                    "EndpointID": "d30fed8c99f6bad98b6e3ca968b174afcb9138bc2e5faed57ea90acda8e14b6f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-068218",
	                        "e88ec4f29f18"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-068218 -n old-k8s-version-068218
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-068218 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-068218 logs -n 25: (1.226777011s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ ssh     │ -p cilium-206273 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo containerd config dump                                                                                                                                                                                                  │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo crio config                                                                                                                                                                                                             │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ delete  │ -p cilium-206273                                                                                                                                                                                                                              │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │ 01 Nov 25 09:25 UTC │
	│ start   │ -p force-systemd-env-778652 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-778652 │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │ 01 Nov 25 09:25 UTC │
	│ start   │ -p pause-951206 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-951206             │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │ 01 Nov 25 09:25 UTC │
	│ pause   │ -p pause-951206 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-951206             │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ delete  │ -p force-systemd-env-778652                                                                                                                                                                                                                   │ force-systemd-env-778652 │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │ 01 Nov 25 09:25 UTC │
	│ delete  │ -p pause-951206                                                                                                                                                                                                                               │ pause-951206             │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │ 01 Nov 25 09:25 UTC │
	│ start   │ -p cert-expiration-218273 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-218273   │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │ 01 Nov 25 09:26 UTC │
	│ start   │ -p cert-options-578478 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-578478      │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │ 01 Nov 25 09:26 UTC │
	│ ssh     │ cert-options-578478 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-578478      │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ 01 Nov 25 09:26 UTC │
	│ ssh     │ -p cert-options-578478 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-578478      │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ 01 Nov 25 09:26 UTC │
	│ delete  │ -p cert-options-578478                                                                                                                                                                                                                        │ cert-options-578478      │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ 01 Nov 25 09:26 UTC │
	│ start   │ -p old-k8s-version-068218 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-068218   │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ 01 Nov 25 09:27 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-068218 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-068218   │ jenkins │ v1.37.0 │ 01 Nov 25 09:27 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:26:28
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:26:28.718210 2487790 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:26:28.718541 2487790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:26:28.718587 2487790 out.go:374] Setting ErrFile to fd 2...
	I1101 09:26:28.718609 2487790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:26:28.718923 2487790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 09:26:28.719385 2487790 out.go:368] Setting JSON to false
	I1101 09:26:28.720332 2487790 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":65335,"bootTime":1761923854,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 09:26:28.720429 2487790 start.go:143] virtualization:  
	I1101 09:26:28.724141 2487790 out.go:179] * [old-k8s-version-068218] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 09:26:28.728698 2487790 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:26:28.728937 2487790 notify.go:221] Checking for updates...
	I1101 09:26:28.735652 2487790 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:26:28.738898 2487790 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:26:28.742085 2487790 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2314135/.minikube
	I1101 09:26:28.745292 2487790 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 09:26:28.748421 2487790 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:26:28.752064 2487790 config.go:182] Loaded profile config "cert-expiration-218273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:26:28.752178 2487790 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:26:28.777296 2487790 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:26:28.777484 2487790 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:26:28.852375 2487790 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 09:26:28.841076208 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:26:28.852498 2487790 docker.go:319] overlay module found
	I1101 09:26:28.856757 2487790 out.go:179] * Using the docker driver based on user configuration
	I1101 09:26:28.860099 2487790 start.go:309] selected driver: docker
	I1101 09:26:28.860121 2487790 start.go:930] validating driver "docker" against <nil>
	I1101 09:26:28.860149 2487790 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:26:28.860886 2487790 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:26:28.945674 2487790 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 09:26:28.935829156 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:26:28.945826 2487790 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:26:28.946045 2487790 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:26:28.949555 2487790 out.go:179] * Using Docker driver with root privileges
	I1101 09:26:28.953342 2487790 cni.go:84] Creating CNI manager for ""
	I1101 09:26:28.953412 2487790 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:26:28.953428 2487790 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:26:28.953521 2487790 start.go:353] cluster config:
	{Name:old-k8s-version-068218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-068218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:26:28.957246 2487790 out.go:179] * Starting "old-k8s-version-068218" primary control-plane node in "old-k8s-version-068218" cluster
	I1101 09:26:28.960110 2487790 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:26:28.962456 2487790 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:26:28.965028 2487790 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 09:26:28.965087 2487790 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1101 09:26:28.965101 2487790 cache.go:59] Caching tarball of preloaded images
	I1101 09:26:28.965185 2487790 preload.go:233] Found /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 09:26:28.965194 2487790 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1101 09:26:28.965301 2487790 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/config.json ...
	I1101 09:26:28.965318 2487790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/config.json: {Name:mk9e126397bbdb6af7a3bab65d958de350038942 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:26:28.965467 2487790 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:26:28.992754 2487790 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:26:28.992779 2487790 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:26:28.992793 2487790 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:26:28.992828 2487790 start.go:360] acquireMachinesLock for old-k8s-version-068218: {Name:mkfc282fcc0d94abffeef2a346c8ebfcf87a3759 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:26:28.992928 2487790 start.go:364] duration metric: took 80.26µs to acquireMachinesLock for "old-k8s-version-068218"
	I1101 09:26:28.992959 2487790 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-068218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-068218 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:26:28.993032 2487790 start.go:125] createHost starting for "" (driver="docker")
	I1101 09:26:28.996691 2487790 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 09:26:28.996929 2487790 start.go:159] libmachine.API.Create for "old-k8s-version-068218" (driver="docker")
	I1101 09:26:28.996963 2487790 client.go:173] LocalClient.Create starting
	I1101 09:26:28.997034 2487790 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem
	I1101 09:26:28.997074 2487790 main.go:143] libmachine: Decoding PEM data...
	I1101 09:26:28.997090 2487790 main.go:143] libmachine: Parsing certificate...
	I1101 09:26:28.997149 2487790 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem
	I1101 09:26:28.997171 2487790 main.go:143] libmachine: Decoding PEM data...
	I1101 09:26:28.997184 2487790 main.go:143] libmachine: Parsing certificate...
	I1101 09:26:28.997541 2487790 cli_runner.go:164] Run: docker network inspect old-k8s-version-068218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 09:26:29.012979 2487790 cli_runner.go:211] docker network inspect old-k8s-version-068218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 09:26:29.013052 2487790 network_create.go:284] running [docker network inspect old-k8s-version-068218] to gather additional debugging logs...
	I1101 09:26:29.013066 2487790 cli_runner.go:164] Run: docker network inspect old-k8s-version-068218
	W1101 09:26:29.031737 2487790 cli_runner.go:211] docker network inspect old-k8s-version-068218 returned with exit code 1
	I1101 09:26:29.031770 2487790 network_create.go:287] error running [docker network inspect old-k8s-version-068218]: docker network inspect old-k8s-version-068218: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-068218 not found
	I1101 09:26:29.031782 2487790 network_create.go:289] output of [docker network inspect old-k8s-version-068218]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-068218 not found
	
	** /stderr **
	I1101 09:26:29.031921 2487790 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:26:29.048697 2487790 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2d14cb2bf967 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:44:96:dd:d5:f7} reservation:<nil>}
	I1101 09:26:29.049030 2487790 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5e2113ca68f6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fa:43:2d:73:9d:6f} reservation:<nil>}
	I1101 09:26:29.049358 2487790 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-06825307e87a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:46:bb:6a:93:8e:bc} reservation:<nil>}
	I1101 09:26:29.049594 2487790 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-82568661a744 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:8a:bb:0c:06:d9:b5} reservation:<nil>}
	I1101 09:26:29.049997 2487790 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b5180}
	I1101 09:26:29.050014 2487790 network_create.go:124] attempt to create docker network old-k8s-version-068218 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1101 09:26:29.050068 2487790 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-068218 old-k8s-version-068218
	I1101 09:26:29.132783 2487790 network_create.go:108] docker network old-k8s-version-068218 192.168.85.0/24 created
	I1101 09:26:29.132810 2487790 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-068218" container
	I1101 09:26:29.132881 2487790 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 09:26:29.154198 2487790 cli_runner.go:164] Run: docker volume create old-k8s-version-068218 --label name.minikube.sigs.k8s.io=old-k8s-version-068218 --label created_by.minikube.sigs.k8s.io=true
	I1101 09:26:29.184591 2487790 oci.go:103] Successfully created a docker volume old-k8s-version-068218
	I1101 09:26:29.184678 2487790 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-068218-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-068218 --entrypoint /usr/bin/test -v old-k8s-version-068218:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 09:26:29.750494 2487790 oci.go:107] Successfully prepared a docker volume old-k8s-version-068218
	I1101 09:26:29.750545 2487790 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 09:26:29.750565 2487790 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 09:26:29.750645 2487790 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-068218:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 09:26:34.590047 2487790 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-068218:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.839354847s)
	I1101 09:26:34.590082 2487790 kic.go:203] duration metric: took 4.839513915s to extract preloaded images to volume ...
	W1101 09:26:34.590230 2487790 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 09:26:34.590340 2487790 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 09:26:34.652813 2487790 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-068218 --name old-k8s-version-068218 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-068218 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-068218 --network old-k8s-version-068218 --ip 192.168.85.2 --volume old-k8s-version-068218:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 09:26:34.955348 2487790 cli_runner.go:164] Run: docker container inspect old-k8s-version-068218 --format={{.State.Running}}
	I1101 09:26:34.978061 2487790 cli_runner.go:164] Run: docker container inspect old-k8s-version-068218 --format={{.State.Status}}
	I1101 09:26:35.001975 2487790 cli_runner.go:164] Run: docker exec old-k8s-version-068218 stat /var/lib/dpkg/alternatives/iptables
	I1101 09:26:35.056406 2487790 oci.go:144] the created container "old-k8s-version-068218" has a running status.
	I1101 09:26:35.056503 2487790 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/old-k8s-version-068218/id_rsa...
	I1101 09:26:35.237637 2487790 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/old-k8s-version-068218/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 09:26:35.285891 2487790 cli_runner.go:164] Run: docker container inspect old-k8s-version-068218 --format={{.State.Status}}
	I1101 09:26:35.305901 2487790 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 09:26:35.305924 2487790 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-068218 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 09:26:35.363424 2487790 cli_runner.go:164] Run: docker container inspect old-k8s-version-068218 --format={{.State.Status}}
	I1101 09:26:35.387347 2487790 machine.go:94] provisionDockerMachine start ...
	I1101 09:26:35.387559 2487790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-068218
	I1101 09:26:35.419208 2487790 main.go:143] libmachine: Using SSH client type: native
	I1101 09:26:35.419601 2487790 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36335 <nil> <nil>}
	I1101 09:26:35.419614 2487790 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:26:35.420346 2487790 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44030->127.0.0.1:36335: read: connection reset by peer
	I1101 09:26:38.571791 2487790 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-068218
	
	I1101 09:26:38.571814 2487790 ubuntu.go:182] provisioning hostname "old-k8s-version-068218"
	I1101 09:26:38.571925 2487790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-068218
	I1101 09:26:38.591818 2487790 main.go:143] libmachine: Using SSH client type: native
	I1101 09:26:38.592201 2487790 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36335 <nil> <nil>}
	I1101 09:26:38.592225 2487790 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-068218 && echo "old-k8s-version-068218" | sudo tee /etc/hostname
	I1101 09:26:38.752293 2487790 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-068218
	
	I1101 09:26:38.752408 2487790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-068218
	I1101 09:26:38.770619 2487790 main.go:143] libmachine: Using SSH client type: native
	I1101 09:26:38.770927 2487790 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36335 <nil> <nil>}
	I1101 09:26:38.770943 2487790 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-068218' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-068218/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-068218' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:26:38.923819 2487790 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:26:38.923870 2487790 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-2314135/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-2314135/.minikube}
	I1101 09:26:38.923900 2487790 ubuntu.go:190] setting up certificates
	I1101 09:26:38.923911 2487790 provision.go:84] configureAuth start
	I1101 09:26:38.923971 2487790 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-068218
	I1101 09:26:38.946437 2487790 provision.go:143] copyHostCerts
	I1101 09:26:38.946509 2487790 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem, removing ...
	I1101 09:26:38.946523 2487790 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem
	I1101 09:26:38.946660 2487790 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem (1675 bytes)
	I1101 09:26:38.946779 2487790 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem, removing ...
	I1101 09:26:38.946793 2487790 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem
	I1101 09:26:38.946824 2487790 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem (1082 bytes)
	I1101 09:26:38.946883 2487790 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem, removing ...
	I1101 09:26:38.946897 2487790 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem
	I1101 09:26:38.946922 2487790 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem (1123 bytes)
	I1101 09:26:38.946972 2487790 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-068218 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-068218]
	I1101 09:26:39.169287 2487790 provision.go:177] copyRemoteCerts
	I1101 09:26:39.169378 2487790 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:26:39.169445 2487790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-068218
	I1101 09:26:39.187308 2487790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36335 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/old-k8s-version-068218/id_rsa Username:docker}
	I1101 09:26:39.291382 2487790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:26:39.307565 2487790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1101 09:26:39.324028 2487790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 09:26:39.341155 2487790 provision.go:87] duration metric: took 417.222896ms to configureAuth
	I1101 09:26:39.341184 2487790 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:26:39.341365 2487790 config.go:182] Loaded profile config "old-k8s-version-068218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 09:26:39.341478 2487790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-068218
	I1101 09:26:39.358378 2487790 main.go:143] libmachine: Using SSH client type: native
	I1101 09:26:39.358702 2487790 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36335 <nil> <nil>}
	I1101 09:26:39.358720 2487790 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:26:39.626687 2487790 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:26:39.626711 2487790 machine.go:97] duration metric: took 4.239346016s to provisionDockerMachine
	I1101 09:26:39.626720 2487790 client.go:176] duration metric: took 10.6297474s to LocalClient.Create
	I1101 09:26:39.626739 2487790 start.go:167] duration metric: took 10.629811841s to libmachine.API.Create "old-k8s-version-068218"
	I1101 09:26:39.626755 2487790 start.go:293] postStartSetup for "old-k8s-version-068218" (driver="docker")
	I1101 09:26:39.626764 2487790 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:26:39.626831 2487790 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:26:39.626876 2487790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-068218
	I1101 09:26:39.644769 2487790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36335 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/old-k8s-version-068218/id_rsa Username:docker}
	I1101 09:26:39.751840 2487790 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:26:39.755236 2487790 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:26:39.755267 2487790 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:26:39.755277 2487790 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/addons for local assets ...
	I1101 09:26:39.755328 2487790 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/files for local assets ...
	I1101 09:26:39.755413 2487790 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem -> 23159822.pem in /etc/ssl/certs
	I1101 09:26:39.755511 2487790 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:26:39.762977 2487790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem --> /etc/ssl/certs/23159822.pem (1708 bytes)
	I1101 09:26:39.780557 2487790 start.go:296] duration metric: took 153.787148ms for postStartSetup
	I1101 09:26:39.780927 2487790 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-068218
	I1101 09:26:39.797614 2487790 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/config.json ...
	I1101 09:26:39.797897 2487790 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:26:39.797962 2487790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-068218
	I1101 09:26:39.815646 2487790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36335 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/old-k8s-version-068218/id_rsa Username:docker}
	I1101 09:26:39.916637 2487790 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:26:39.921028 2487790 start.go:128] duration metric: took 10.927981828s to createHost
	I1101 09:26:39.921049 2487790 start.go:83] releasing machines lock for "old-k8s-version-068218", held for 10.928107593s
	I1101 09:26:39.921115 2487790 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-068218
	I1101 09:26:39.937046 2487790 ssh_runner.go:195] Run: cat /version.json
	I1101 09:26:39.937105 2487790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-068218
	I1101 09:26:39.937170 2487790 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:26:39.937231 2487790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-068218
	I1101 09:26:39.957067 2487790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36335 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/old-k8s-version-068218/id_rsa Username:docker}
	I1101 09:26:39.958361 2487790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36335 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/old-k8s-version-068218/id_rsa Username:docker}
	I1101 09:26:40.152365 2487790 ssh_runner.go:195] Run: systemctl --version
	I1101 09:26:40.159369 2487790 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:26:40.206752 2487790 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:26:40.211333 2487790 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:26:40.211466 2487790 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:26:40.247667 2487790 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 09:26:40.247693 2487790 start.go:496] detecting cgroup driver to use...
	I1101 09:26:40.247726 2487790 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 09:26:40.247774 2487790 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:26:40.265432 2487790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:26:40.278549 2487790 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:26:40.278617 2487790 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:26:40.296195 2487790 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:26:40.315606 2487790 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:26:40.442882 2487790 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:26:40.567237 2487790 docker.go:234] disabling docker service ...
	I1101 09:26:40.567341 2487790 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:26:40.588784 2487790 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:26:40.602394 2487790 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:26:40.714295 2487790 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:26:40.830943 2487790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:26:40.846835 2487790 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:26:40.861953 2487790 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 09:26:40.862017 2487790 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:26:40.870645 2487790 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:26:40.870743 2487790 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:26:40.879466 2487790 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:26:40.887958 2487790 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:26:40.896335 2487790 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:26:40.904777 2487790 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:26:40.913438 2487790 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:26:40.926660 2487790 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:26:40.937674 2487790 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:26:40.946226 2487790 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:26:40.953762 2487790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:26:41.080110 2487790 ssh_runner.go:195] Run: sudo systemctl restart crio
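The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl) before CRI-O is restarted. A minimal sketch for spot-checking the resulting runtime config on the node, reusing the paths and socket from the log (the grep pattern itself is only illustrative):
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version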
	I1101 09:26:41.215365 2487790 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:26:41.215435 2487790 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:26:41.219389 2487790 start.go:564] Will wait 60s for crictl version
	I1101 09:26:41.219453 2487790 ssh_runner.go:195] Run: which crictl
	I1101 09:26:41.222930 2487790 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:26:41.247898 2487790 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:26:41.247999 2487790 ssh_runner.go:195] Run: crio --version
	I1101 09:26:41.276515 2487790 ssh_runner.go:195] Run: crio --version
	I1101 09:26:41.313829 2487790 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1101 09:26:41.316676 2487790 cli_runner.go:164] Run: docker network inspect old-k8s-version-068218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:26:41.333386 2487790 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 09:26:41.338134 2487790 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:26:41.351028 2487790 kubeadm.go:884] updating cluster {Name:old-k8s-version-068218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-068218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:26:41.351163 2487790 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 09:26:41.351252 2487790 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:26:41.389211 2487790 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:26:41.389233 2487790 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:26:41.389297 2487790 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:26:41.416415 2487790 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:26:41.416444 2487790 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:26:41.416453 2487790 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1101 09:26:41.416624 2487790 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-068218 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-068218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
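The kubelet unit drop-in shown above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down, followed by a daemon-reload and kubelet start. A minimal sketch of that install pattern, assuming the drop-in content has been saved locally as 10-kubeadm.conf (a hypothetical file name used only for illustration):
	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo cp 10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	sudo systemctl daemon-reload
	sudo systemctl start kubelet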
	I1101 09:26:41.416759 2487790 ssh_runner.go:195] Run: crio config
	I1101 09:26:41.495828 2487790 cni.go:84] Creating CNI manager for ""
	I1101 09:26:41.495918 2487790 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:26:41.495952 2487790 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:26:41.495985 2487790 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-068218 NodeName:old-k8s-version-068218 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:26:41.496129 2487790 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-068218"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
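The kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new and copied to /var/tmp/minikube/kubeadm.yaml before init. A minimal sketch for sanity-checking such a config without changing node state, assuming the v1.28.0 binaries directory from the log is used (the test itself runs a full init below):
	sudo /var/lib/minikube/binaries/v1.28.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run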
	I1101 09:26:41.496206 2487790 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1101 09:26:41.504116 2487790 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:26:41.504189 2487790 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:26:41.512308 2487790 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1101 09:26:41.527282 2487790 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:26:41.540921 2487790 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1101 09:26:41.555142 2487790 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:26:41.559983 2487790 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:26:41.570157 2487790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:26:41.681976 2487790 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:26:41.697072 2487790 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218 for IP: 192.168.85.2
	I1101 09:26:41.697092 2487790 certs.go:195] generating shared ca certs ...
	I1101 09:26:41.697107 2487790 certs.go:227] acquiring lock for ca certs: {Name:mk24842b93d4e231663829c7c8677798ff77a3a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:26:41.697251 2487790 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key
	I1101 09:26:41.697305 2487790 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key
	I1101 09:26:41.697318 2487790 certs.go:257] generating profile certs ...
	I1101 09:26:41.697377 2487790 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/client.key
	I1101 09:26:41.697398 2487790 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/client.crt with IP's: []
	I1101 09:26:43.236322 2487790 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/client.crt ...
	I1101 09:26:43.236361 2487790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/client.crt: {Name:mk62058372eeeeda73f20eafea027b2bf0e6df40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:26:43.236625 2487790 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/client.key ...
	I1101 09:26:43.236644 2487790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/client.key: {Name:mkacbebd6c5f710aefa044e47aa74ff9992de19a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:26:43.236753 2487790 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/apiserver.key.85e8465c
	I1101 09:26:43.236768 2487790 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/apiserver.crt.85e8465c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1101 09:26:43.326684 2487790 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/apiserver.crt.85e8465c ...
	I1101 09:26:43.326717 2487790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/apiserver.crt.85e8465c: {Name:mkb837c5e069045dd95ba1ea2f2cd2f060eee40b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:26:43.326903 2487790 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/apiserver.key.85e8465c ...
	I1101 09:26:43.326919 2487790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/apiserver.key.85e8465c: {Name:mk317489c8a5d811c42b2f49b2127de430363296 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:26:43.327006 2487790 certs.go:382] copying /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/apiserver.crt.85e8465c -> /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/apiserver.crt
	I1101 09:26:43.327088 2487790 certs.go:386] copying /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/apiserver.key.85e8465c -> /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/apiserver.key
	I1101 09:26:43.327148 2487790 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/proxy-client.key
	I1101 09:26:43.327166 2487790 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/proxy-client.crt with IP's: []
	I1101 09:26:43.513433 2487790 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/proxy-client.crt ...
	I1101 09:26:43.513463 2487790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/proxy-client.crt: {Name:mk23a8b36ebb233ab9472e24f6d44d6fdd3d887f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:26:43.513648 2487790 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/proxy-client.key ...
	I1101 09:26:43.513662 2487790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/proxy-client.key: {Name:mk8c5bda84688c35505729d528a8838f8efefe4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:26:43.513895 2487790 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem (1338 bytes)
	W1101 09:26:43.513942 2487790 certs.go:480] ignoring /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982_empty.pem, impossibly tiny 0 bytes
	I1101 09:26:43.513955 2487790 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 09:26:43.513992 2487790 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:26:43.514030 2487790 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:26:43.514070 2487790 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem (1675 bytes)
	I1101 09:26:43.514121 2487790 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem (1708 bytes)
	I1101 09:26:43.514832 2487790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:26:43.533881 2487790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 09:26:43.554381 2487790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:26:43.574404 2487790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:26:43.594122 2487790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 09:26:43.612256 2487790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 09:26:43.630055 2487790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:26:43.647265 2487790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 09:26:43.667583 2487790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem --> /usr/share/ca-certificates/23159822.pem (1708 bytes)
	I1101 09:26:43.685304 2487790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:26:43.702915 2487790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem --> /usr/share/ca-certificates/2315982.pem (1338 bytes)
	I1101 09:26:43.721155 2487790 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:26:43.734948 2487790 ssh_runner.go:195] Run: openssl version
	I1101 09:26:43.741085 2487790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23159822.pem && ln -fs /usr/share/ca-certificates/23159822.pem /etc/ssl/certs/23159822.pem"
	I1101 09:26:43.749472 2487790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23159822.pem
	I1101 09:26:43.753293 2487790 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 08:36 /usr/share/ca-certificates/23159822.pem
	I1101 09:26:43.753380 2487790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23159822.pem
	I1101 09:26:43.794240 2487790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23159822.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:26:43.802847 2487790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:26:43.810751 2487790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:26:43.814375 2487790 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:26:43.814455 2487790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:26:43.855313 2487790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:26:43.863177 2487790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2315982.pem && ln -fs /usr/share/ca-certificates/2315982.pem /etc/ssl/certs/2315982.pem"
	I1101 09:26:43.870925 2487790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2315982.pem
	I1101 09:26:43.874951 2487790 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 08:36 /usr/share/ca-certificates/2315982.pem
	I1101 09:26:43.875022 2487790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2315982.pem
	I1101 09:26:43.920860 2487790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2315982.pem /etc/ssl/certs/51391683.0"
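Each of the ln -fs calls above creates the OpenSSL hash-named link (3ec20f2e.0, b5213941.0, 51391683.0) that lets the system trust store resolve the corresponding certificate. A minimal sketch of how such a link name is derived, using the minikubeCA paths from the log:
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"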
	I1101 09:26:43.929001 2487790 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:26:43.932402 2487790 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 09:26:43.932455 2487790 kubeadm.go:401] StartCluster: {Name:old-k8s-version-068218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-068218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:26:43.932529 2487790 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:26:43.932606 2487790 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:26:43.958340 2487790 cri.go:89] found id: ""
	I1101 09:26:43.958419 2487790 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:26:43.966191 2487790 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:26:43.973902 2487790 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 09:26:43.973962 2487790 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:26:43.982005 2487790 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 09:26:43.982023 2487790 kubeadm.go:158] found existing configuration files:
	
	I1101 09:26:43.982111 2487790 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 09:26:43.989696 2487790 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 09:26:43.989763 2487790 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 09:26:43.997416 2487790 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 09:26:44.006865 2487790 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 09:26:44.006956 2487790 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:26:44.016283 2487790 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 09:26:44.026465 2487790 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 09:26:44.026534 2487790 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:26:44.035341 2487790 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 09:26:44.044848 2487790 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 09:26:44.044923 2487790 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 09:26:44.053571 2487790 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 09:26:44.110712 2487790 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1101 09:26:44.110982 2487790 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 09:26:44.151339 2487790 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 09:26:44.151426 2487790 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 09:26:44.151470 2487790 kubeadm.go:319] OS: Linux
	I1101 09:26:44.151531 2487790 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 09:26:44.151593 2487790 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 09:26:44.151653 2487790 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 09:26:44.151715 2487790 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 09:26:44.151769 2487790 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 09:26:44.151830 2487790 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 09:26:44.151915 2487790 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 09:26:44.151985 2487790 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 09:26:44.152046 2487790 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 09:26:44.236978 2487790 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 09:26:44.237121 2487790 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 09:26:44.237261 2487790 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 09:26:44.388645 2487790 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 09:26:44.392217 2487790 out.go:252]   - Generating certificates and keys ...
	I1101 09:26:44.392389 2487790 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 09:26:44.392576 2487790 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 09:26:45.565740 2487790 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 09:26:46.112945 2487790 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 09:26:46.346308 2487790 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 09:26:47.262095 2487790 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 09:26:47.734059 2487790 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 09:26:47.734217 2487790 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-068218] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 09:26:48.069356 2487790 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 09:26:48.073926 2487790 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-068218] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 09:26:48.371170 2487790 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 09:26:48.565657 2487790 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 09:26:49.075138 2487790 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 09:26:49.075422 2487790 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 09:26:49.384600 2487790 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 09:26:49.704610 2487790 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 09:26:50.374164 2487790 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 09:26:50.564585 2487790 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 09:26:50.565280 2487790 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 09:26:50.567965 2487790 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 09:26:50.571546 2487790 out.go:252]   - Booting up control plane ...
	I1101 09:26:50.571661 2487790 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:26:50.571743 2487790 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:26:50.571813 2487790 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 09:26:50.586782 2487790 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:26:50.587580 2487790 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:26:50.587844 2487790 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:26:50.713884 2487790 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 09:26:58.719625 2487790 kubeadm.go:319] [apiclient] All control plane components are healthy after 8.008661 seconds
	I1101 09:26:58.719760 2487790 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 09:26:58.736652 2487790 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 09:26:59.265093 2487790 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 09:26:59.265484 2487790 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-068218 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 09:26:59.780087 2487790 kubeadm.go:319] [bootstrap-token] Using token: ptao3u.2au6w3w3bslvp3bz
	I1101 09:26:59.782966 2487790 out.go:252]   - Configuring RBAC rules ...
	I1101 09:26:59.783103 2487790 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 09:26:59.787285 2487790 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 09:26:59.795913 2487790 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 09:26:59.800131 2487790 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 09:26:59.806846 2487790 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 09:26:59.810976 2487790 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 09:26:59.838865 2487790 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 09:27:00.345149 2487790 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 09:27:00.484196 2487790 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 09:27:00.486516 2487790 kubeadm.go:319] 
	I1101 09:27:00.486598 2487790 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 09:27:00.486605 2487790 kubeadm.go:319] 
	I1101 09:27:00.486686 2487790 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 09:27:00.486691 2487790 kubeadm.go:319] 
	I1101 09:27:00.486718 2487790 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 09:27:00.487259 2487790 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 09:27:00.487320 2487790 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 09:27:00.487325 2487790 kubeadm.go:319] 
	I1101 09:27:00.487381 2487790 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 09:27:00.487386 2487790 kubeadm.go:319] 
	I1101 09:27:00.487435 2487790 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 09:27:00.487440 2487790 kubeadm.go:319] 
	I1101 09:27:00.487494 2487790 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 09:27:00.487573 2487790 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 09:27:00.487645 2487790 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 09:27:00.487649 2487790 kubeadm.go:319] 
	I1101 09:27:00.487964 2487790 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 09:27:00.488051 2487790 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 09:27:00.488056 2487790 kubeadm.go:319] 
	I1101 09:27:00.488366 2487790 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ptao3u.2au6w3w3bslvp3bz \
	I1101 09:27:00.488479 2487790 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4543f3590cccb8495171c728a2631a18a238961aafa5b09f43cdaf25ae01fa5d \
	I1101 09:27:00.488733 2487790 kubeadm.go:319] 	--control-plane 
	I1101 09:27:00.488751 2487790 kubeadm.go:319] 
	I1101 09:27:00.489059 2487790 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 09:27:00.489071 2487790 kubeadm.go:319] 
	I1101 09:27:00.489417 2487790 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ptao3u.2au6w3w3bslvp3bz \
	I1101 09:27:00.489878 2487790 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4543f3590cccb8495171c728a2631a18a238961aafa5b09f43cdaf25ae01fa5d 
	I1101 09:27:00.495632 2487790 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 09:27:00.495765 2487790 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 09:27:00.495845 2487790 cni.go:84] Creating CNI manager for ""
	I1101 09:27:00.495886 2487790 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:27:00.501120 2487790 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 09:27:00.504054 2487790 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 09:27:00.509414 2487790 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1101 09:27:00.509433 2487790 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 09:27:00.525857 2487790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 09:27:01.559221 2487790 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.03330999s)
	I1101 09:27:01.559259 2487790 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:27:01.559378 2487790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:27:01.559447 2487790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-068218 minikube.k8s.io/updated_at=2025_11_01T09_27_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192 minikube.k8s.io/name=old-k8s-version-068218 minikube.k8s.io/primary=true
	I1101 09:27:01.779597 2487790 ops.go:34] apiserver oom_adj: -16
	I1101 09:27:01.779731 2487790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:27:02.280143 2487790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:27:02.779995 2487790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:27:03.280513 2487790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:27:03.780218 2487790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:27:04.279973 2487790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:27:04.779879 2487790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:27:05.279925 2487790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:27:05.780691 2487790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:27:06.280295 2487790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:27:06.780749 2487790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:27:07.279895 2487790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:27:07.780096 2487790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:27:08.280716 2487790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:27:08.780639 2487790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:27:09.280314 2487790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:27:09.779886 2487790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:27:10.279997 2487790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:27:10.780341 2487790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:27:11.280619 2487790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:27:11.780240 2487790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:27:12.279999 2487790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:27:12.780691 2487790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:27:12.911043 2487790 kubeadm.go:1114] duration metric: took 11.351707527s to wait for elevateKubeSystemPrivileges
	I1101 09:27:12.911074 2487790 kubeadm.go:403] duration metric: took 28.978623459s to StartCluster
	I1101 09:27:12.911091 2487790 settings.go:142] acquiring lock: {Name:mka73a3765cb6575d4abe38a6ae3325222684786 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:27:12.911148 2487790 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:27:12.912171 2487790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/kubeconfig: {Name:mk53329368b7306829f4e47471838b13e1e36d2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:27:12.912388 2487790 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:27:12.912550 2487790 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 09:27:12.912805 2487790 config.go:182] Loaded profile config "old-k8s-version-068218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 09:27:12.912839 2487790 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:27:12.912897 2487790 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-068218"
	I1101 09:27:12.912911 2487790 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-068218"
	I1101 09:27:12.912934 2487790 host.go:66] Checking if "old-k8s-version-068218" exists ...
	I1101 09:27:12.913604 2487790 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-068218"
	I1101 09:27:12.913628 2487790 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-068218"
	I1101 09:27:12.913694 2487790 cli_runner.go:164] Run: docker container inspect old-k8s-version-068218 --format={{.State.Status}}
	I1101 09:27:12.913950 2487790 cli_runner.go:164] Run: docker container inspect old-k8s-version-068218 --format={{.State.Status}}
	I1101 09:27:12.917862 2487790 out.go:179] * Verifying Kubernetes components...
	I1101 09:27:12.922229 2487790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:27:12.960734 2487790 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-068218"
	I1101 09:27:12.960780 2487790 host.go:66] Checking if "old-k8s-version-068218" exists ...
	I1101 09:27:12.961186 2487790 cli_runner.go:164] Run: docker container inspect old-k8s-version-068218 --format={{.State.Status}}
	I1101 09:27:12.968030 2487790 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:27:12.972128 2487790 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:27:12.972153 2487790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:27:12.972216 2487790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-068218
	I1101 09:27:13.001866 2487790 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:27:13.001887 2487790 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:27:13.001970 2487790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-068218
	I1101 09:27:13.052040 2487790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36335 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/old-k8s-version-068218/id_rsa Username:docker}
	I1101 09:27:13.071573 2487790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36335 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/old-k8s-version-068218/id_rsa Username:docker}
	I1101 09:27:13.303672 2487790 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 09:27:13.303843 2487790 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:27:13.334405 2487790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:27:13.370924 2487790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:27:14.301665 2487790 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1101 09:27:14.303414 2487790 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-068218" to be "Ready" ...
	I1101 09:27:14.624809 2487790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.290362579s)
	I1101 09:27:14.624879 2487790 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.253892929s)
	I1101 09:27:14.638623 2487790 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 09:27:14.641752 2487790 addons.go:515] duration metric: took 1.728810354s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 09:27:14.808582 2487790 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-068218" context rescaled to 1 replicas
	W1101 09:27:16.306222 2487790 node_ready.go:57] node "old-k8s-version-068218" has "Ready":"False" status (will retry)
	W1101 09:27:18.307174 2487790 node_ready.go:57] node "old-k8s-version-068218" has "Ready":"False" status (will retry)
	W1101 09:27:20.806825 2487790 node_ready.go:57] node "old-k8s-version-068218" has "Ready":"False" status (will retry)
	W1101 09:27:22.806997 2487790 node_ready.go:57] node "old-k8s-version-068218" has "Ready":"False" status (will retry)
	W1101 09:27:24.807594 2487790 node_ready.go:57] node "old-k8s-version-068218" has "Ready":"False" status (will retry)
	I1101 09:27:26.809743 2487790 node_ready.go:49] node "old-k8s-version-068218" is "Ready"
	I1101 09:27:26.809772 2487790 node_ready.go:38] duration metric: took 12.506330673s for node "old-k8s-version-068218" to be "Ready" ...
	I1101 09:27:26.809784 2487790 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:27:26.809843 2487790 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:27:26.828031 2487790 api_server.go:72] duration metric: took 13.915602886s to wait for apiserver process to appear ...
	I1101 09:27:26.828053 2487790 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:27:26.828072 2487790 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:27:26.838930 2487790 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1101 09:27:26.841575 2487790 api_server.go:141] control plane version: v1.28.0
	I1101 09:27:26.841598 2487790 api_server.go:131] duration metric: took 13.538487ms to wait for apiserver health ...
	I1101 09:27:26.841607 2487790 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:27:26.845120 2487790 system_pods.go:59] 8 kube-system pods found
	I1101 09:27:26.845152 2487790 system_pods.go:61] "coredns-5dd5756b68-b4f66" [6758b28d-65e8-4750-8150-214984beb6a2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:27:26.845159 2487790 system_pods.go:61] "etcd-old-k8s-version-068218" [97c22198-a6fa-4d82-8ae3-981cf4543c10] Running
	I1101 09:27:26.845164 2487790 system_pods.go:61] "kindnet-8ks7s" [7eeb1ffb-51f8-4229-bf9c-6457fdc0eede] Running
	I1101 09:27:26.845173 2487790 system_pods.go:61] "kube-apiserver-old-k8s-version-068218" [13d7db97-cfab-4362-b3b7-ac0a5aef54fd] Running
	I1101 09:27:26.845178 2487790 system_pods.go:61] "kube-controller-manager-old-k8s-version-068218" [b0d936ee-d062-4e6c-9d95-4574d23b71fd] Running
	I1101 09:27:26.845182 2487790 system_pods.go:61] "kube-proxy-9574h" [23a5f11d-f074-4c54-a831-2ec6b7220d73] Running
	I1101 09:27:26.845198 2487790 system_pods.go:61] "kube-scheduler-old-k8s-version-068218" [b70eb666-3066-4829-ba12-05475e5c8509] Running
	I1101 09:27:26.845204 2487790 system_pods.go:61] "storage-provisioner" [2cf435bc-9907-4482-a9ba-eee3b7afe7d2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:27:26.845210 2487790 system_pods.go:74] duration metric: took 3.598292ms to wait for pod list to return data ...
	I1101 09:27:26.845221 2487790 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:27:26.847235 2487790 default_sa.go:45] found service account: "default"
	I1101 09:27:26.847256 2487790 default_sa.go:55] duration metric: took 2.030532ms for default service account to be created ...
	I1101 09:27:26.847265 2487790 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:27:26.850492 2487790 system_pods.go:86] 8 kube-system pods found
	I1101 09:27:26.850522 2487790 system_pods.go:89] "coredns-5dd5756b68-b4f66" [6758b28d-65e8-4750-8150-214984beb6a2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:27:26.850528 2487790 system_pods.go:89] "etcd-old-k8s-version-068218" [97c22198-a6fa-4d82-8ae3-981cf4543c10] Running
	I1101 09:27:26.850534 2487790 system_pods.go:89] "kindnet-8ks7s" [7eeb1ffb-51f8-4229-bf9c-6457fdc0eede] Running
	I1101 09:27:26.850557 2487790 system_pods.go:89] "kube-apiserver-old-k8s-version-068218" [13d7db97-cfab-4362-b3b7-ac0a5aef54fd] Running
	I1101 09:27:26.850572 2487790 system_pods.go:89] "kube-controller-manager-old-k8s-version-068218" [b0d936ee-d062-4e6c-9d95-4574d23b71fd] Running
	I1101 09:27:26.850577 2487790 system_pods.go:89] "kube-proxy-9574h" [23a5f11d-f074-4c54-a831-2ec6b7220d73] Running
	I1101 09:27:26.850581 2487790 system_pods.go:89] "kube-scheduler-old-k8s-version-068218" [b70eb666-3066-4829-ba12-05475e5c8509] Running
	I1101 09:27:26.850587 2487790 system_pods.go:89] "storage-provisioner" [2cf435bc-9907-4482-a9ba-eee3b7afe7d2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:27:26.850608 2487790 retry.go:31] will retry after 215.072228ms: missing components: kube-dns
	I1101 09:27:27.071194 2487790 system_pods.go:86] 8 kube-system pods found
	I1101 09:27:27.071280 2487790 system_pods.go:89] "coredns-5dd5756b68-b4f66" [6758b28d-65e8-4750-8150-214984beb6a2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:27:27.071302 2487790 system_pods.go:89] "etcd-old-k8s-version-068218" [97c22198-a6fa-4d82-8ae3-981cf4543c10] Running
	I1101 09:27:27.071331 2487790 system_pods.go:89] "kindnet-8ks7s" [7eeb1ffb-51f8-4229-bf9c-6457fdc0eede] Running
	I1101 09:27:27.071358 2487790 system_pods.go:89] "kube-apiserver-old-k8s-version-068218" [13d7db97-cfab-4362-b3b7-ac0a5aef54fd] Running
	I1101 09:27:27.071377 2487790 system_pods.go:89] "kube-controller-manager-old-k8s-version-068218" [b0d936ee-d062-4e6c-9d95-4574d23b71fd] Running
	I1101 09:27:27.071396 2487790 system_pods.go:89] "kube-proxy-9574h" [23a5f11d-f074-4c54-a831-2ec6b7220d73] Running
	I1101 09:27:27.071425 2487790 system_pods.go:89] "kube-scheduler-old-k8s-version-068218" [b70eb666-3066-4829-ba12-05475e5c8509] Running
	I1101 09:27:27.071459 2487790 system_pods.go:89] "storage-provisioner" [2cf435bc-9907-4482-a9ba-eee3b7afe7d2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:27:27.071501 2487790 retry.go:31] will retry after 313.194708ms: missing components: kube-dns
	I1101 09:27:27.388986 2487790 system_pods.go:86] 8 kube-system pods found
	I1101 09:27:27.389019 2487790 system_pods.go:89] "coredns-5dd5756b68-b4f66" [6758b28d-65e8-4750-8150-214984beb6a2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:27:27.389036 2487790 system_pods.go:89] "etcd-old-k8s-version-068218" [97c22198-a6fa-4d82-8ae3-981cf4543c10] Running
	I1101 09:27:27.389044 2487790 system_pods.go:89] "kindnet-8ks7s" [7eeb1ffb-51f8-4229-bf9c-6457fdc0eede] Running
	I1101 09:27:27.389048 2487790 system_pods.go:89] "kube-apiserver-old-k8s-version-068218" [13d7db97-cfab-4362-b3b7-ac0a5aef54fd] Running
	I1101 09:27:27.389054 2487790 system_pods.go:89] "kube-controller-manager-old-k8s-version-068218" [b0d936ee-d062-4e6c-9d95-4574d23b71fd] Running
	I1101 09:27:27.389058 2487790 system_pods.go:89] "kube-proxy-9574h" [23a5f11d-f074-4c54-a831-2ec6b7220d73] Running
	I1101 09:27:27.389063 2487790 system_pods.go:89] "kube-scheduler-old-k8s-version-068218" [b70eb666-3066-4829-ba12-05475e5c8509] Running
	I1101 09:27:27.389068 2487790 system_pods.go:89] "storage-provisioner" [2cf435bc-9907-4482-a9ba-eee3b7afe7d2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:27:27.389090 2487790 retry.go:31] will retry after 411.942043ms: missing components: kube-dns
	I1101 09:27:27.805244 2487790 system_pods.go:86] 8 kube-system pods found
	I1101 09:27:27.805275 2487790 system_pods.go:89] "coredns-5dd5756b68-b4f66" [6758b28d-65e8-4750-8150-214984beb6a2] Running
	I1101 09:27:27.805281 2487790 system_pods.go:89] "etcd-old-k8s-version-068218" [97c22198-a6fa-4d82-8ae3-981cf4543c10] Running
	I1101 09:27:27.805287 2487790 system_pods.go:89] "kindnet-8ks7s" [7eeb1ffb-51f8-4229-bf9c-6457fdc0eede] Running
	I1101 09:27:27.805292 2487790 system_pods.go:89] "kube-apiserver-old-k8s-version-068218" [13d7db97-cfab-4362-b3b7-ac0a5aef54fd] Running
	I1101 09:27:27.805297 2487790 system_pods.go:89] "kube-controller-manager-old-k8s-version-068218" [b0d936ee-d062-4e6c-9d95-4574d23b71fd] Running
	I1101 09:27:27.805301 2487790 system_pods.go:89] "kube-proxy-9574h" [23a5f11d-f074-4c54-a831-2ec6b7220d73] Running
	I1101 09:27:27.805337 2487790 system_pods.go:89] "kube-scheduler-old-k8s-version-068218" [b70eb666-3066-4829-ba12-05475e5c8509] Running
	I1101 09:27:27.805348 2487790 system_pods.go:89] "storage-provisioner" [2cf435bc-9907-4482-a9ba-eee3b7afe7d2] Running
	I1101 09:27:27.805358 2487790 system_pods.go:126] duration metric: took 958.088151ms to wait for k8s-apps to be running ...
	I1101 09:27:27.805370 2487790 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:27:27.805436 2487790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:27:27.818909 2487790 system_svc.go:56] duration metric: took 13.528626ms WaitForService to wait for kubelet
	I1101 09:27:27.818938 2487790 kubeadm.go:587] duration metric: took 14.906527065s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:27:27.818957 2487790 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:27:27.825461 2487790 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 09:27:27.825495 2487790 node_conditions.go:123] node cpu capacity is 2
	I1101 09:27:27.825512 2487790 node_conditions.go:105] duration metric: took 6.549151ms to run NodePressure ...
	I1101 09:27:27.825525 2487790 start.go:242] waiting for startup goroutines ...
	I1101 09:27:27.825536 2487790 start.go:247] waiting for cluster config update ...
	I1101 09:27:27.825547 2487790 start.go:256] writing updated cluster config ...
	I1101 09:27:27.825847 2487790 ssh_runner.go:195] Run: rm -f paused
	I1101 09:27:27.829836 2487790 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:27:27.839545 2487790 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-b4f66" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:27:27.844681 2487790 pod_ready.go:94] pod "coredns-5dd5756b68-b4f66" is "Ready"
	I1101 09:27:27.844708 2487790 pod_ready.go:86] duration metric: took 5.139903ms for pod "coredns-5dd5756b68-b4f66" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:27:27.847880 2487790 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-068218" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:27:27.852459 2487790 pod_ready.go:94] pod "etcd-old-k8s-version-068218" is "Ready"
	I1101 09:27:27.852488 2487790 pod_ready.go:86] duration metric: took 4.555081ms for pod "etcd-old-k8s-version-068218" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:27:27.855659 2487790 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-068218" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:27:27.860476 2487790 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-068218" is "Ready"
	I1101 09:27:27.860506 2487790 pod_ready.go:86] duration metric: took 4.820395ms for pod "kube-apiserver-old-k8s-version-068218" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:27:27.864219 2487790 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-068218" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:27:28.233754 2487790 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-068218" is "Ready"
	I1101 09:27:28.233790 2487790 pod_ready.go:86] duration metric: took 369.545483ms for pod "kube-controller-manager-old-k8s-version-068218" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:27:28.435031 2487790 pod_ready.go:83] waiting for pod "kube-proxy-9574h" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:27:28.840218 2487790 pod_ready.go:94] pod "kube-proxy-9574h" is "Ready"
	I1101 09:27:28.840292 2487790 pod_ready.go:86] duration metric: took 405.231376ms for pod "kube-proxy-9574h" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:27:29.035405 2487790 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-068218" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:27:29.434565 2487790 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-068218" is "Ready"
	I1101 09:27:29.434638 2487790 pod_ready.go:86] duration metric: took 399.197796ms for pod "kube-scheduler-old-k8s-version-068218" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:27:29.434667 2487790 pod_ready.go:40] duration metric: took 1.604785087s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:27:29.491428 2487790 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1101 09:27:29.494684 2487790 out.go:203] 
	W1101 09:27:29.497588 2487790 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1101 09:27:29.500613 2487790 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1101 09:27:29.503647 2487790 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-068218" cluster and "default" namespace by default
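	For reference, the readiness polling and health check that minikube performed above can be reproduced by hand from inside the node (e.g. after `minikube -p old-k8s-version-068218 ssh`). This is a minimal sketch; the kubeconfig path, kubectl binary path, and apiserver address are the ones shown in the log lines above.
	    sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get sa default
	    curl -k https://192.168.85.2:8443/healthz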
	
	
	==> CRI-O <==
	Nov 01 09:27:27 old-k8s-version-068218 crio[838]: time="2025-11-01T09:27:27.16663179Z" level=info msg="Created container 3d8e807e0dac75e8933cba6495ae6c551811fd2be5dc12952466aaad7f53e47e: kube-system/coredns-5dd5756b68-b4f66/coredns" id=9848eec4-6a0b-441b-b2a9-8293687b4d83 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:27:27 old-k8s-version-068218 crio[838]: time="2025-11-01T09:27:27.167533877Z" level=info msg="Starting container: 3d8e807e0dac75e8933cba6495ae6c551811fd2be5dc12952466aaad7f53e47e" id=b77a3010-4a6e-4e59-9f19-8b3a3846879b name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:27:27 old-k8s-version-068218 crio[838]: time="2025-11-01T09:27:27.169766283Z" level=info msg="Started container" PID=1927 containerID=3d8e807e0dac75e8933cba6495ae6c551811fd2be5dc12952466aaad7f53e47e description=kube-system/coredns-5dd5756b68-b4f66/coredns id=b77a3010-4a6e-4e59-9f19-8b3a3846879b name=/runtime.v1.RuntimeService/StartContainer sandboxID=cf2be4f7b73ea23f10f0474fa4607a4c84c6a096c1af5eb8ee227b024e770b57
	Nov 01 09:27:30 old-k8s-version-068218 crio[838]: time="2025-11-01T09:27:30.022918551Z" level=info msg="Running pod sandbox: default/busybox/POD" id=81b339df-c56b-458d-a2a6-ab9118c1cd98 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:27:30 old-k8s-version-068218 crio[838]: time="2025-11-01T09:27:30.023003677Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:27:30 old-k8s-version-068218 crio[838]: time="2025-11-01T09:27:30.030395029Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b974ea6190398e08bb1934219b8b0c512b3a0044ae919884172740e4a8a29188 UID:aca786ff-1a58-408b-98dc-1f5b4e71eb07 NetNS:/var/run/netns/757b63b6-8cbe-4a17-a200-4247e0d85296 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012c8c8}] Aliases:map[]}"
	Nov 01 09:27:30 old-k8s-version-068218 crio[838]: time="2025-11-01T09:27:30.030590043Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 01 09:27:30 old-k8s-version-068218 crio[838]: time="2025-11-01T09:27:30.04213684Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b974ea6190398e08bb1934219b8b0c512b3a0044ae919884172740e4a8a29188 UID:aca786ff-1a58-408b-98dc-1f5b4e71eb07 NetNS:/var/run/netns/757b63b6-8cbe-4a17-a200-4247e0d85296 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012c8c8}] Aliases:map[]}"
	Nov 01 09:27:30 old-k8s-version-068218 crio[838]: time="2025-11-01T09:27:30.042305787Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 01 09:27:30 old-k8s-version-068218 crio[838]: time="2025-11-01T09:27:30.048045238Z" level=info msg="Ran pod sandbox b974ea6190398e08bb1934219b8b0c512b3a0044ae919884172740e4a8a29188 with infra container: default/busybox/POD" id=81b339df-c56b-458d-a2a6-ab9118c1cd98 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:27:30 old-k8s-version-068218 crio[838]: time="2025-11-01T09:27:30.049267457Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d774b179-3ba5-4539-87ca-b93b0b8ebd81 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:27:30 old-k8s-version-068218 crio[838]: time="2025-11-01T09:27:30.04940808Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=d774b179-3ba5-4539-87ca-b93b0b8ebd81 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:27:30 old-k8s-version-068218 crio[838]: time="2025-11-01T09:27:30.049460231Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=d774b179-3ba5-4539-87ca-b93b0b8ebd81 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:27:30 old-k8s-version-068218 crio[838]: time="2025-11-01T09:27:30.050124688Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b1f14476-69a5-4a8a-8bbd-42744d56e77e name=/runtime.v1.ImageService/PullImage
	Nov 01 09:27:30 old-k8s-version-068218 crio[838]: time="2025-11-01T09:27:30.052904533Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 01 09:27:32 old-k8s-version-068218 crio[838]: time="2025-11-01T09:27:32.171803223Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=b1f14476-69a5-4a8a-8bbd-42744d56e77e name=/runtime.v1.ImageService/PullImage
	Nov 01 09:27:32 old-k8s-version-068218 crio[838]: time="2025-11-01T09:27:32.173110576Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4071e673-59c8-4c1c-adc1-c0709ee056f5 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:27:32 old-k8s-version-068218 crio[838]: time="2025-11-01T09:27:32.174980489Z" level=info msg="Creating container: default/busybox/busybox" id=24eccc02-338e-4e2f-b239-4dd9215944ef name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:27:32 old-k8s-version-068218 crio[838]: time="2025-11-01T09:27:32.175102742Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:27:32 old-k8s-version-068218 crio[838]: time="2025-11-01T09:27:32.182611407Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:27:32 old-k8s-version-068218 crio[838]: time="2025-11-01T09:27:32.183076847Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:27:32 old-k8s-version-068218 crio[838]: time="2025-11-01T09:27:32.199964575Z" level=info msg="Created container 21916fa81fb12f681386d21e79178ebc9c0ca4ac1dec5d2ff1518abec330adce: default/busybox/busybox" id=24eccc02-338e-4e2f-b239-4dd9215944ef name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:27:32 old-k8s-version-068218 crio[838]: time="2025-11-01T09:27:32.200652031Z" level=info msg="Starting container: 21916fa81fb12f681386d21e79178ebc9c0ca4ac1dec5d2ff1518abec330adce" id=a20dab91-591f-49e3-931d-d4a33ed194f2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:27:32 old-k8s-version-068218 crio[838]: time="2025-11-01T09:27:32.202884617Z" level=info msg="Started container" PID=1984 containerID=21916fa81fb12f681386d21e79178ebc9c0ca4ac1dec5d2ff1518abec330adce description=default/busybox/busybox id=a20dab91-591f-49e3-931d-d4a33ed194f2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b974ea6190398e08bb1934219b8b0c512b3a0044ae919884172740e4a8a29188
	Nov 01 09:27:38 old-k8s-version-068218 crio[838]: time="2025-11-01T09:27:38.90997388Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	21916fa81fb12       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   b974ea6190398       busybox                                          default
	3d8e807e0dac7       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      13 seconds ago      Running             coredns                   0                   cf2be4f7b73ea       coredns-5dd5756b68-b4f66                         kube-system
	9afedf70a0da8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   0d999478c6421       storage-provisioner                              kube-system
	9c0c7ff5fd40e       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    24 seconds ago      Running             kindnet-cni               0                   77ae5673bf986       kindnet-8ks7s                                    kube-system
	4f9fb2a4d05fc       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      27 seconds ago      Running             kube-proxy                0                   8b4d38cf62f5f       kube-proxy-9574h                                 kube-system
	04298184a9f4f       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      47 seconds ago      Running             kube-scheduler            0                   040dce00947b3       kube-scheduler-old-k8s-version-068218            kube-system
	f3a252f017d62       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      47 seconds ago      Running             kube-apiserver            0                   0fd067d0b2c32       kube-apiserver-old-k8s-version-068218            kube-system
	0858b6644088b       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      47 seconds ago      Running             kube-controller-manager   0                   2660c93184c03       kube-controller-manager-old-k8s-version-068218   kube-system
	f203d8fcec772       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      47 seconds ago      Running             etcd                      0                   55a5562f0fb6b       etcd-old-k8s-version-068218                      kube-system
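	The container listing above is read from the CRI; a roughly equivalent view can be regenerated on the node with crictl, assuming the CRI-O socket is at its default location:
	    sudo crictl ps -a
	    sudo crictl images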
	
	
	==> coredns [3d8e807e0dac75e8933cba6495ae6c551811fd2be5dc12952466aaad7f53e47e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47226 - 26550 "HINFO IN 5694218211141656404.6278630751044975317. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022052007s
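	The running CoreDNS configuration, including the host.minikube.internal hosts block injected earlier in the start log, lives in the Corefile key of the coredns ConfigMap and can be inspected with, for example:
	    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'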
	
	
	==> describe nodes <==
	Name:               old-k8s-version-068218
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-068218
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=old-k8s-version-068218
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_27_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:26:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-068218
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:27:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:27:31 +0000   Sat, 01 Nov 2025 09:26:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:27:31 +0000   Sat, 01 Nov 2025 09:26:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:27:31 +0000   Sat, 01 Nov 2025 09:26:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:27:31 +0000   Sat, 01 Nov 2025 09:27:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-068218
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                84351bdd-8654-4943-b8ea-c75bd6268b89
	  Boot ID:                    eebecd53-57fd-46e5-aa39-103fca906436
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-b4f66                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-old-k8s-version-068218                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         40s
	  kube-system                 kindnet-8ks7s                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-068218             250m (12%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-068218    200m (10%)    0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-9574h                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-068218             100m (5%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  NodeHasSufficientMemory  48s (x9 over 48s)  kubelet          Node old-k8s-version-068218 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    48s (x8 over 48s)  kubelet          Node old-k8s-version-068218 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     48s (x7 over 48s)  kubelet          Node old-k8s-version-068218 status is now: NodeHasSufficientPID
	  Normal  Starting                 40s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s                kubelet          Node old-k8s-version-068218 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s                kubelet          Node old-k8s-version-068218 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s                kubelet          Node old-k8s-version-068218 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s                node-controller  Node old-k8s-version-068218 event: Registered Node old-k8s-version-068218 in Controller
	  Normal  NodeReady                14s                kubelet          Node old-k8s-version-068218 status is now: NodeReady
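	The Ready condition that the test waited on is part of the node status shown above; it can be queried directly, for example:
	    kubectl get node old-k8s-version-068218 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'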
	
	
	==> dmesg <==
	[Nov 1 09:02] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:03] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:08] overlayfs: idmapped layers are currently not supported
	[ +35.036001] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:10] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:11] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:12] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:13] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:14] overlayfs: idmapped layers are currently not supported
	[  +7.992192] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:15] overlayfs: idmapped layers are currently not supported
	[ +24.457663] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:16] overlayfs: idmapped layers are currently not supported
	[ +26.408819] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:18] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:22] overlayfs: idmapped layers are currently not supported
	[ +31.970573] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:24] overlayfs: idmapped layers are currently not supported
	[ +34.721891] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:25] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:26] overlayfs: idmapped layers are currently not supported
	[  +0.217637] overlayfs: idmapped layers are currently not supported
	[ +42.063471] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [f203d8fcec772e4fe8d438a64f1f799f795db71af2632f4039be34d26a575905] <==
	{"level":"info","ts":"2025-11-01T09:26:53.08799Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-01T09:26:53.091984Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-01T09:26:53.10004Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-01T09:26:53.100206Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-01T09:26:53.100345Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-01T09:26:53.104902Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-01T09:26:53.104824Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-01T09:26:53.94267Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-01T09:26:53.942726Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-01T09:26:53.942745Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-11-01T09:26:53.942758Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-11-01T09:26:53.94277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-01T09:26:53.94278Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-11-01T09:26:53.942788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-01T09:26:53.947268Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T09:26:53.94757Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-068218 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-01T09:26:53.947629Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T09:26:53.948422Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T09:26:53.96018Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T09:26:53.960281Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T09:26:53.95288Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T09:26:53.961263Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-01T09:26:53.956568Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-01T09:26:53.963952Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-01T09:26:53.967912Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 09:27:40 up 18:10,  0 user,  load average: 3.47, 3.91, 2.97
	Linux old-k8s-version-068218 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9c0c7ff5fd40e8d30ac4ad75b1827a336f6059a90e589772645356ce79650f15] <==
	I1101 09:27:16.251115       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:27:16.251346       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 09:27:16.251489       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:27:16.251512       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:27:16.251526       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:27:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:27:16.455890       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:27:16.455979       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:27:16.456015       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:27:16.456151       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:27:16.656249       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:27:16.656362       1 metrics.go:72] Registering metrics
	I1101 09:27:16.656481       1 controller.go:711] "Syncing nftables rules"
	I1101 09:27:26.456840       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 09:27:26.456894       1 main.go:301] handling current node
	I1101 09:27:36.453049       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 09:27:36.453158       1 main.go:301] handling current node
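	kindnet reports that it is syncing nftables rules for network policies. If needed, the rules it programs can be inspected from the node's host network namespace, assuming the nft tool is available there:
	    sudo nft list ruleset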
	
	
	==> kube-apiserver [f3a252f017d62536fa257e43514138acf2e538323972d9a00ca79fc51df7d725] <==
	I1101 09:26:56.928297       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1101 09:26:56.928432       1 aggregator.go:166] initial CRD sync complete...
	I1101 09:26:56.928492       1 autoregister_controller.go:141] Starting autoregister controller
	I1101 09:26:56.928548       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:26:56.928578       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:26:56.929289       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1101 09:26:56.932166       1 shared_informer.go:318] Caches are synced for configmaps
	I1101 09:26:56.933987       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1101 09:26:56.934047       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1101 09:26:56.984372       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:26:57.624501       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 09:26:57.630662       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 09:26:57.630683       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:26:58.213823       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:26:58.265197       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:26:58.365387       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 09:26:58.372051       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1101 09:26:58.373134       1 controller.go:624] quota admission added evaluator for: endpoints
	I1101 09:26:58.377663       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:26:58.876165       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1101 09:27:00.293008       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1101 09:27:00.343075       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 09:27:00.355172       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1101 09:27:12.529576       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1101 09:27:12.641137       1 controller.go:624] quota admission added evaluator for: replicasets.apps
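	The apiserver log shows the kubernetes service endpoint being reset to 192.168.85.2 and cluster IPs being allocated for kubernetes (10.96.0.1) and kube-dns (10.96.0.10); both can be checked against the live cluster, for example:
	    kubectl -n default get endpoints kubernetes
	    kubectl -n kube-system get svc kube-dns -o jsonpath='{.spec.clusterIP}'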
	
	
	==> kube-controller-manager [0858b6644088b8e79cd8810867cd76430cd1c0c730f4e9db3f11f0fd0d57a692] <==
	I1101 09:27:12.557811       1 shared_informer.go:318] Caches are synced for disruption
	I1101 09:27:12.565636       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-8ks7s"
	I1101 09:27:12.568145       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 09:27:12.571812       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-9574h"
	I1101 09:27:12.626876       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 09:27:12.647005       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1101 09:27:12.755013       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-l2pfh"
	I1101 09:27:12.769325       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-b4f66"
	I1101 09:27:12.785066       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="139.080578ms"
	I1101 09:27:12.803327       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.060175ms"
	I1101 09:27:12.824315       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="20.916146ms"
	I1101 09:27:12.824433       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.332µs"
	I1101 09:27:12.977687       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 09:27:12.977717       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1101 09:27:13.040747       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 09:27:14.358659       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1101 09:27:14.385423       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-l2pfh"
	I1101 09:27:14.413702       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="57.574721ms"
	I1101 09:27:14.435676       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.930558ms"
	I1101 09:27:14.435922       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="148.705µs"
	I1101 09:27:26.781261       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="92.026µs"
	I1101 09:27:26.798832       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="322.986µs"
	I1101 09:27:27.484789       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1101 09:27:27.737287       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.067131ms"
	I1101 09:27:27.737403       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.344µs"
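	These controller-manager events correspond to the earlier "rescaled to 1 replicas" message: the coredns Deployment was scaled from 2 down to 1. The resulting replica count can be confirmed with, for example:
	    kubectl -n kube-system get deployment coredns -o jsonpath='{.spec.replicas}'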
	
	
	==> kube-proxy [4f9fb2a4d05fcff7cd140ac13ab6df721bec5209e5c2ecacac160f10948f2261] <==
	I1101 09:27:13.264490       1 server_others.go:69] "Using iptables proxy"
	I1101 09:27:13.306796       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1101 09:27:13.339294       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:27:13.342406       1 server_others.go:152] "Using iptables Proxier"
	I1101 09:27:13.342439       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1101 09:27:13.342446       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1101 09:27:13.342476       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 09:27:13.342675       1 server.go:846] "Version info" version="v1.28.0"
	I1101 09:27:13.342685       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:27:13.344875       1 config.go:188] "Starting service config controller"
	I1101 09:27:13.344921       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 09:27:13.344943       1 config.go:97] "Starting endpoint slice config controller"
	I1101 09:27:13.344947       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 09:27:13.352848       1 config.go:315] "Starting node config controller"
	I1101 09:27:13.352864       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 09:27:13.445455       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1101 09:27:13.445498       1 shared_informer.go:318] Caches are synced for service config
	I1101 09:27:13.455945       1 shared_informer.go:318] Caches are synced for node config
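	kube-proxy is running in iptables mode, so the service rules it programs land in the nat table under the KUBE-SERVICES chain; a minimal check on the node (assuming the iptables binary there matches the backend kube-proxy is using):
	    sudo iptables -t nat -L KUBE-SERVICES -n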
	
	
	==> kube-scheduler [04298184a9f4f14fb389a69e10f5c65dc0379adb7d356a66dad384851480c692] <==
	W1101 09:26:56.934984       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1101 09:26:56.935052       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1101 09:26:56.935142       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1101 09:26:56.935179       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1101 09:26:56.935256       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1101 09:26:56.935292       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1101 09:26:56.935397       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1101 09:26:56.935434       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1101 09:26:56.935503       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1101 09:26:56.935537       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1101 09:26:56.935638       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1101 09:26:56.935677       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1101 09:26:57.826039       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1101 09:26:57.826076       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1101 09:26:57.846483       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1101 09:26:57.846517       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1101 09:26:57.946785       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1101 09:26:57.946815       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1101 09:26:58.007906       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1101 09:26:58.008055       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1101 09:26:58.008384       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1101 09:26:58.008638       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 09:26:58.010338       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1101 09:26:58.010444       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1101 09:26:59.724979       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 01 09:27:12 old-k8s-version-068218 kubelet[1368]: I1101 09:27:12.591736    1368 topology_manager.go:215] "Topology Admit Handler" podUID="23a5f11d-f074-4c54-a831-2ec6b7220d73" podNamespace="kube-system" podName="kube-proxy-9574h"
	Nov 01 09:27:12 old-k8s-version-068218 kubelet[1368]: I1101 09:27:12.624284    1368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7eeb1ffb-51f8-4229-bf9c-6457fdc0eede-lib-modules\") pod \"kindnet-8ks7s\" (UID: \"7eeb1ffb-51f8-4229-bf9c-6457fdc0eede\") " pod="kube-system/kindnet-8ks7s"
	Nov 01 09:27:12 old-k8s-version-068218 kubelet[1368]: I1101 09:27:12.624338    1368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7eeb1ffb-51f8-4229-bf9c-6457fdc0eede-xtables-lock\") pod \"kindnet-8ks7s\" (UID: \"7eeb1ffb-51f8-4229-bf9c-6457fdc0eede\") " pod="kube-system/kindnet-8ks7s"
	Nov 01 09:27:12 old-k8s-version-068218 kubelet[1368]: I1101 09:27:12.624366    1368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb86j\" (UniqueName: \"kubernetes.io/projected/23a5f11d-f074-4c54-a831-2ec6b7220d73-kube-api-access-mb86j\") pod \"kube-proxy-9574h\" (UID: \"23a5f11d-f074-4c54-a831-2ec6b7220d73\") " pod="kube-system/kube-proxy-9574h"
	Nov 01 09:27:12 old-k8s-version-068218 kubelet[1368]: I1101 09:27:12.624391    1368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7eeb1ffb-51f8-4229-bf9c-6457fdc0eede-cni-cfg\") pod \"kindnet-8ks7s\" (UID: \"7eeb1ffb-51f8-4229-bf9c-6457fdc0eede\") " pod="kube-system/kindnet-8ks7s"
	Nov 01 09:27:12 old-k8s-version-068218 kubelet[1368]: I1101 09:27:12.624418    1368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpzgd\" (UniqueName: \"kubernetes.io/projected/7eeb1ffb-51f8-4229-bf9c-6457fdc0eede-kube-api-access-vpzgd\") pod \"kindnet-8ks7s\" (UID: \"7eeb1ffb-51f8-4229-bf9c-6457fdc0eede\") " pod="kube-system/kindnet-8ks7s"
	Nov 01 09:27:12 old-k8s-version-068218 kubelet[1368]: I1101 09:27:12.624440    1368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23a5f11d-f074-4c54-a831-2ec6b7220d73-xtables-lock\") pod \"kube-proxy-9574h\" (UID: \"23a5f11d-f074-4c54-a831-2ec6b7220d73\") " pod="kube-system/kube-proxy-9574h"
	Nov 01 09:27:12 old-k8s-version-068218 kubelet[1368]: I1101 09:27:12.624462    1368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/23a5f11d-f074-4c54-a831-2ec6b7220d73-kube-proxy\") pod \"kube-proxy-9574h\" (UID: \"23a5f11d-f074-4c54-a831-2ec6b7220d73\") " pod="kube-system/kube-proxy-9574h"
	Nov 01 09:27:12 old-k8s-version-068218 kubelet[1368]: I1101 09:27:12.624486    1368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23a5f11d-f074-4c54-a831-2ec6b7220d73-lib-modules\") pod \"kube-proxy-9574h\" (UID: \"23a5f11d-f074-4c54-a831-2ec6b7220d73\") " pod="kube-system/kube-proxy-9574h"
	Nov 01 09:27:16 old-k8s-version-068218 kubelet[1368]: I1101 09:27:16.681255    1368 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-9574h" podStartSLOduration=4.681202037 podCreationTimestamp="2025-11-01 09:27:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:27:13.673257667 +0000 UTC m=+13.533499441" watchObservedRunningTime="2025-11-01 09:27:16.681202037 +0000 UTC m=+16.541443803"
	Nov 01 09:27:20 old-k8s-version-068218 kubelet[1368]: I1101 09:27:20.459315    1368 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-8ks7s" podStartSLOduration=5.311994298 podCreationTimestamp="2025-11-01 09:27:12 +0000 UTC" firstStartedPulling="2025-11-01 09:27:12.918615609 +0000 UTC m=+12.778857375" lastFinishedPulling="2025-11-01 09:27:16.06589527 +0000 UTC m=+15.926137036" observedRunningTime="2025-11-01 09:27:16.682067662 +0000 UTC m=+16.542309436" watchObservedRunningTime="2025-11-01 09:27:20.459273959 +0000 UTC m=+20.319515733"
	Nov 01 09:27:26 old-k8s-version-068218 kubelet[1368]: I1101 09:27:26.748383    1368 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 01 09:27:26 old-k8s-version-068218 kubelet[1368]: I1101 09:27:26.779017    1368 topology_manager.go:215] "Topology Admit Handler" podUID="6758b28d-65e8-4750-8150-214984beb6a2" podNamespace="kube-system" podName="coredns-5dd5756b68-b4f66"
	Nov 01 09:27:26 old-k8s-version-068218 kubelet[1368]: I1101 09:27:26.786351    1368 topology_manager.go:215] "Topology Admit Handler" podUID="2cf435bc-9907-4482-a9ba-eee3b7afe7d2" podNamespace="kube-system" podName="storage-provisioner"
	Nov 01 09:27:26 old-k8s-version-068218 kubelet[1368]: I1101 09:27:26.864516    1368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r2dv\" (UniqueName: \"kubernetes.io/projected/6758b28d-65e8-4750-8150-214984beb6a2-kube-api-access-5r2dv\") pod \"coredns-5dd5756b68-b4f66\" (UID: \"6758b28d-65e8-4750-8150-214984beb6a2\") " pod="kube-system/coredns-5dd5756b68-b4f66"
	Nov 01 09:27:26 old-k8s-version-068218 kubelet[1368]: I1101 09:27:26.864738    1368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6758b28d-65e8-4750-8150-214984beb6a2-config-volume\") pod \"coredns-5dd5756b68-b4f66\" (UID: \"6758b28d-65e8-4750-8150-214984beb6a2\") " pod="kube-system/coredns-5dd5756b68-b4f66"
	Nov 01 09:27:26 old-k8s-version-068218 kubelet[1368]: I1101 09:27:26.965845    1368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2cf435bc-9907-4482-a9ba-eee3b7afe7d2-tmp\") pod \"storage-provisioner\" (UID: \"2cf435bc-9907-4482-a9ba-eee3b7afe7d2\") " pod="kube-system/storage-provisioner"
	Nov 01 09:27:26 old-k8s-version-068218 kubelet[1368]: I1101 09:27:26.965924    1368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh6xp\" (UniqueName: \"kubernetes.io/projected/2cf435bc-9907-4482-a9ba-eee3b7afe7d2-kube-api-access-dh6xp\") pod \"storage-provisioner\" (UID: \"2cf435bc-9907-4482-a9ba-eee3b7afe7d2\") " pod="kube-system/storage-provisioner"
	Nov 01 09:27:27 old-k8s-version-068218 kubelet[1368]: W1101 09:27:27.098535    1368 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e88ec4f29f189ceff4fe4bdf474ad9f9e0ae1e6116ca92110016a09e33532bf4/crio-0d999478c6421fc941903f7f5a19327bc11595947632130779ff3e7658508b20 WatchSource:0}: Error finding container 0d999478c6421fc941903f7f5a19327bc11595947632130779ff3e7658508b20: Status 404 returned error can't find the container with id 0d999478c6421fc941903f7f5a19327bc11595947632130779ff3e7658508b20
	Nov 01 09:27:27 old-k8s-version-068218 kubelet[1368]: I1101 09:27:27.719422    1368 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-b4f66" podStartSLOduration=15.719349976 podCreationTimestamp="2025-11-01 09:27:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:27:27.718645528 +0000 UTC m=+27.578887302" watchObservedRunningTime="2025-11-01 09:27:27.719349976 +0000 UTC m=+27.579591742"
	Nov 01 09:27:27 old-k8s-version-068218 kubelet[1368]: I1101 09:27:27.719531    1368 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.719512826999999 podCreationTimestamp="2025-11-01 09:27:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:27:27.70407351 +0000 UTC m=+27.564315275" watchObservedRunningTime="2025-11-01 09:27:27.719512827 +0000 UTC m=+27.579754592"
	Nov 01 09:27:29 old-k8s-version-068218 kubelet[1368]: I1101 09:27:29.721157    1368 topology_manager.go:215] "Topology Admit Handler" podUID="aca786ff-1a58-408b-98dc-1f5b4e71eb07" podNamespace="default" podName="busybox"
	Nov 01 09:27:29 old-k8s-version-068218 kubelet[1368]: I1101 09:27:29.882802    1368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wksh\" (UniqueName: \"kubernetes.io/projected/aca786ff-1a58-408b-98dc-1f5b4e71eb07-kube-api-access-9wksh\") pod \"busybox\" (UID: \"aca786ff-1a58-408b-98dc-1f5b4e71eb07\") " pod="default/busybox"
	Nov 01 09:27:30 old-k8s-version-068218 kubelet[1368]: W1101 09:27:30.044513    1368 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e88ec4f29f189ceff4fe4bdf474ad9f9e0ae1e6116ca92110016a09e33532bf4/crio-b974ea6190398e08bb1934219b8b0c512b3a0044ae919884172740e4a8a29188 WatchSource:0}: Error finding container b974ea6190398e08bb1934219b8b0c512b3a0044ae919884172740e4a8a29188: Status 404 returned error can't find the container with id b974ea6190398e08bb1934219b8b0c512b3a0044ae919884172740e4a8a29188
	Nov 01 09:27:32 old-k8s-version-068218 kubelet[1368]: I1101 09:27:32.717592    1368 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.5948151510000002 podCreationTimestamp="2025-11-01 09:27:29 +0000 UTC" firstStartedPulling="2025-11-01 09:27:30.049673632 +0000 UTC m=+29.909915398" lastFinishedPulling="2025-11-01 09:27:32.172409656 +0000 UTC m=+32.032651421" observedRunningTime="2025-11-01 09:27:32.716744181 +0000 UTC m=+32.576985955" watchObservedRunningTime="2025-11-01 09:27:32.717551174 +0000 UTC m=+32.577792940"
	
	
	==> storage-provisioner [9afedf70a0da8927e65be5cc714429c0afd6657f16a0f4c823286d034c160465] <==
	I1101 09:27:27.176270       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 09:27:27.208871       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 09:27:27.208930       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1101 09:27:27.217714       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 09:27:27.218538       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-068218_b5020d22-8af6-4c8a-9736-6043321dbdc9!
	I1101 09:27:27.218355       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3ce5421b-f133-4a3c-9fef-747d273e5cf2", APIVersion:"v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-068218_b5020d22-8af6-4c8a-9736-6043321dbdc9 became leader
	I1101 09:27:27.318915       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-068218_b5020d22-8af6-4c8a-9736-6043321dbdc9!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-068218 -n old-k8s-version-068218
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-068218 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.45s)
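For reference, the harness' final check above (helpers_test.go:269) lists pods that are not in the Running phase. The same query can be repeated by hand; a minimal sketch, assuming the old-k8s-version-068218 context is still present in the local kubeconfig:

	kubectl --context old-k8s-version-068218 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'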

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.66s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-068218 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-068218 --alsologtostderr -v=1: exit status 80 (2.040364454s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-068218 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:28:53.748308 2493779 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:28:53.748419 2493779 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:28:53.748429 2493779 out.go:374] Setting ErrFile to fd 2...
	I1101 09:28:53.748434 2493779 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:28:53.748678 2493779 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 09:28:53.748989 2493779 out.go:368] Setting JSON to false
	I1101 09:28:53.749017 2493779 mustload.go:66] Loading cluster: old-k8s-version-068218
	I1101 09:28:53.749448 2493779 config.go:182] Loaded profile config "old-k8s-version-068218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 09:28:53.750079 2493779 cli_runner.go:164] Run: docker container inspect old-k8s-version-068218 --format={{.State.Status}}
	I1101 09:28:53.772800 2493779 host.go:66] Checking if "old-k8s-version-068218" exists ...
	I1101 09:28:53.773122 2493779 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:28:53.845687 2493779 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-01 09:28:53.831400013 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:28:53.846335 2493779 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-068218 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 09:28:53.851356 2493779 out.go:179] * Pausing node old-k8s-version-068218 ... 
	I1101 09:28:53.856474 2493779 host.go:66] Checking if "old-k8s-version-068218" exists ...
	I1101 09:28:53.856857 2493779 ssh_runner.go:195] Run: systemctl --version
	I1101 09:28:53.856912 2493779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-068218
	I1101 09:28:53.873506 2493779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36340 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/old-k8s-version-068218/id_rsa Username:docker}
	I1101 09:28:53.978165 2493779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:28:53.990375 2493779 pause.go:52] kubelet running: true
	I1101 09:28:53.990442 2493779 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:28:54.231026 2493779 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:28:54.231117 2493779 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:28:54.296041 2493779 cri.go:89] found id: "aa7739ac6e46f17ba37552d3aad001e0f45adf530a865a141ec3e994a46cee75"
	I1101 09:28:54.296065 2493779 cri.go:89] found id: "3762d722428dc59ef53f0455f537bb438e72cf8437c310c1a43dd9b5f7b7fb14"
	I1101 09:28:54.296070 2493779 cri.go:89] found id: "3cbff79beb5a9432964e10a6930c81e374df801ea1c933508cf2b39f6c5c86b2"
	I1101 09:28:54.296074 2493779 cri.go:89] found id: "e915ac6e3880e2ad0729af6f8b7d39ad7dac08fd8419522abb00a0450855afa9"
	I1101 09:28:54.296078 2493779 cri.go:89] found id: "eec280948389885da1b27c55ff4b58fbb0c1a0294e5d4c42be0a4b9d1da3ad5c"
	I1101 09:28:54.296081 2493779 cri.go:89] found id: "fe733df8bf3e845ffe6b6dedb1032f3540ea13212061a9c8d745c49a950708c5"
	I1101 09:28:54.296084 2493779 cri.go:89] found id: "067c804f8e21876fb45f3c152802ae3d319e8a7ba1a0ed58c096fa2d93f176f8"
	I1101 09:28:54.296087 2493779 cri.go:89] found id: "74dcaccfb8d03e88ec7bc0d5f860e724acc1ef7e6b6647ac057b5ec4884a4749"
	I1101 09:28:54.296090 2493779 cri.go:89] found id: "7fb0e5e75636afbc0298538d44e50df7785d62e2185f396e1c8404fbf222a6e4"
	I1101 09:28:54.296122 2493779 cri.go:89] found id: "a5fae30fce3491b8f98375ff9f0a4ceabfd362edca9834cecb6515441319d920"
	I1101 09:28:54.296130 2493779 cri.go:89] found id: "af4e0baac89bbf43554732d2ba200bf33d3c88daff4b532594b9253c2c92686f"
	I1101 09:28:54.296133 2493779 cri.go:89] found id: ""
	I1101 09:28:54.296190 2493779 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:28:54.314736 2493779 retry.go:31] will retry after 325.957793ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:28:54Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:28:54.641352 2493779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:28:54.661340 2493779 pause.go:52] kubelet running: false
	I1101 09:28:54.661446 2493779 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:28:54.825119 2493779 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:28:54.825237 2493779 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:28:54.898411 2493779 cri.go:89] found id: "aa7739ac6e46f17ba37552d3aad001e0f45adf530a865a141ec3e994a46cee75"
	I1101 09:28:54.898435 2493779 cri.go:89] found id: "3762d722428dc59ef53f0455f537bb438e72cf8437c310c1a43dd9b5f7b7fb14"
	I1101 09:28:54.898440 2493779 cri.go:89] found id: "3cbff79beb5a9432964e10a6930c81e374df801ea1c933508cf2b39f6c5c86b2"
	I1101 09:28:54.898447 2493779 cri.go:89] found id: "e915ac6e3880e2ad0729af6f8b7d39ad7dac08fd8419522abb00a0450855afa9"
	I1101 09:28:54.898451 2493779 cri.go:89] found id: "eec280948389885da1b27c55ff4b58fbb0c1a0294e5d4c42be0a4b9d1da3ad5c"
	I1101 09:28:54.898454 2493779 cri.go:89] found id: "fe733df8bf3e845ffe6b6dedb1032f3540ea13212061a9c8d745c49a950708c5"
	I1101 09:28:54.898458 2493779 cri.go:89] found id: "067c804f8e21876fb45f3c152802ae3d319e8a7ba1a0ed58c096fa2d93f176f8"
	I1101 09:28:54.898462 2493779 cri.go:89] found id: "74dcaccfb8d03e88ec7bc0d5f860e724acc1ef7e6b6647ac057b5ec4884a4749"
	I1101 09:28:54.898478 2493779 cri.go:89] found id: "7fb0e5e75636afbc0298538d44e50df7785d62e2185f396e1c8404fbf222a6e4"
	I1101 09:28:54.898498 2493779 cri.go:89] found id: "a5fae30fce3491b8f98375ff9f0a4ceabfd362edca9834cecb6515441319d920"
	I1101 09:28:54.898506 2493779 cri.go:89] found id: "af4e0baac89bbf43554732d2ba200bf33d3c88daff4b532594b9253c2c92686f"
	I1101 09:28:54.898509 2493779 cri.go:89] found id: ""
	I1101 09:28:54.898560 2493779 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:28:54.909190 2493779 retry.go:31] will retry after 538.932641ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:28:54Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:28:55.449038 2493779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:28:55.461754 2493779 pause.go:52] kubelet running: false
	I1101 09:28:55.461816 2493779 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:28:55.635815 2493779 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:28:55.635908 2493779 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:28:55.701160 2493779 cri.go:89] found id: "aa7739ac6e46f17ba37552d3aad001e0f45adf530a865a141ec3e994a46cee75"
	I1101 09:28:55.701181 2493779 cri.go:89] found id: "3762d722428dc59ef53f0455f537bb438e72cf8437c310c1a43dd9b5f7b7fb14"
	I1101 09:28:55.701186 2493779 cri.go:89] found id: "3cbff79beb5a9432964e10a6930c81e374df801ea1c933508cf2b39f6c5c86b2"
	I1101 09:28:55.701189 2493779 cri.go:89] found id: "e915ac6e3880e2ad0729af6f8b7d39ad7dac08fd8419522abb00a0450855afa9"
	I1101 09:28:55.701192 2493779 cri.go:89] found id: "eec280948389885da1b27c55ff4b58fbb0c1a0294e5d4c42be0a4b9d1da3ad5c"
	I1101 09:28:55.701196 2493779 cri.go:89] found id: "fe733df8bf3e845ffe6b6dedb1032f3540ea13212061a9c8d745c49a950708c5"
	I1101 09:28:55.701204 2493779 cri.go:89] found id: "067c804f8e21876fb45f3c152802ae3d319e8a7ba1a0ed58c096fa2d93f176f8"
	I1101 09:28:55.701207 2493779 cri.go:89] found id: "74dcaccfb8d03e88ec7bc0d5f860e724acc1ef7e6b6647ac057b5ec4884a4749"
	I1101 09:28:55.701210 2493779 cri.go:89] found id: "7fb0e5e75636afbc0298538d44e50df7785d62e2185f396e1c8404fbf222a6e4"
	I1101 09:28:55.701216 2493779 cri.go:89] found id: "a5fae30fce3491b8f98375ff9f0a4ceabfd362edca9834cecb6515441319d920"
	I1101 09:28:55.701219 2493779 cri.go:89] found id: "af4e0baac89bbf43554732d2ba200bf33d3c88daff4b532594b9253c2c92686f"
	I1101 09:28:55.701222 2493779 cri.go:89] found id: ""
	I1101 09:28:55.701268 2493779 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:28:55.715595 2493779 out.go:203] 
	W1101 09:28:55.718848 2493779 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:28:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:28:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:28:55.718910 2493779 out.go:285] * 
	* 
	W1101 09:28:55.731237 2493779 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:28:55.734467 2493779 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-068218 --alsologtostderr -v=1 failed: exit status 80
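The GUEST_PAUSE failure above boils down to `sudo runc list -f json` exiting 1 with `open /run/runc: no such file or directory` even though crictl still reports the kube-system containers, so minikube gives up after its retries. The same probes can be re-run by hand on the node; a minimal sketch, where the `minikube -p ... ssh --` invocation is an assumption about the local binary (the report itself invokes out/minikube-linux-arm64) and the inner commands are copied from the trace:

	# list kube-system containers the way the pause path does
	minikube -p old-k8s-version-068218 ssh -- "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# inspect the runc state root named in the error
	minikube -p old-k8s-version-068218 ssh -- "sudo ls -la /run/runc"
	# the exact call that fails in the trace
	minikube -p old-k8s-version-068218 ssh -- "sudo runc list -f json"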
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-068218
helpers_test.go:243: (dbg) docker inspect old-k8s-version-068218:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e88ec4f29f189ceff4fe4bdf474ad9f9e0ae1e6116ca92110016a09e33532bf4",
	        "Created": "2025-11-01T09:26:34.668923657Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2491686,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:27:53.905932784Z",
	            "FinishedAt": "2025-11-01T09:27:53.092987369Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/e88ec4f29f189ceff4fe4bdf474ad9f9e0ae1e6116ca92110016a09e33532bf4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e88ec4f29f189ceff4fe4bdf474ad9f9e0ae1e6116ca92110016a09e33532bf4/hostname",
	        "HostsPath": "/var/lib/docker/containers/e88ec4f29f189ceff4fe4bdf474ad9f9e0ae1e6116ca92110016a09e33532bf4/hosts",
	        "LogPath": "/var/lib/docker/containers/e88ec4f29f189ceff4fe4bdf474ad9f9e0ae1e6116ca92110016a09e33532bf4/e88ec4f29f189ceff4fe4bdf474ad9f9e0ae1e6116ca92110016a09e33532bf4-json.log",
	        "Name": "/old-k8s-version-068218",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-068218:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-068218",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e88ec4f29f189ceff4fe4bdf474ad9f9e0ae1e6116ca92110016a09e33532bf4",
	                "LowerDir": "/var/lib/docker/overlay2/488e76226c62f13342a618b323cabf4fd578df8c302831cd955bc8b2c518c74e-init/diff:/var/lib/docker/overlay2/e248e2c4c8c52e2b41c7098e27a1e6d3433c7b0d01c47093073da500268c4b77/diff",
	                "MergedDir": "/var/lib/docker/overlay2/488e76226c62f13342a618b323cabf4fd578df8c302831cd955bc8b2c518c74e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/488e76226c62f13342a618b323cabf4fd578df8c302831cd955bc8b2c518c74e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/488e76226c62f13342a618b323cabf4fd578df8c302831cd955bc8b2c518c74e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-068218",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-068218/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-068218",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-068218",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-068218",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cbb920d089d4bef3b517ef1aad6863dcf9b559b5e3fb163268e6477284529fb3",
	            "SandboxKey": "/var/run/docker/netns/cbb920d089d4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36340"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36341"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36344"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36342"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36343"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-068218": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:4c:5c:92:74:80",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e195285262e64d3b782d7abf538ceec14d34fc8c1e31d12d18b21428d3b9ea34",
	                    "EndpointID": "f5bc9f461c409f14623bd01c82157352db4928b20d9c9d7ceb4ca13c5a5de4b3",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-068218",
	                        "e88ec4f29f18"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-068218 -n old-k8s-version-068218
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-068218 -n old-k8s-version-068218: exit status 2 (343.95909ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-068218 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-068218 logs -n 25: (1.308561158s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ ssh     │ -p cilium-206273 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo containerd config dump                                                                                                                                                                                                  │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo crio config                                                                                                                                                                                                             │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ delete  │ -p cilium-206273                                                                                                                                                                                                                              │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │ 01 Nov 25 09:25 UTC │
	│ start   │ -p force-systemd-env-778652 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-778652 │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │ 01 Nov 25 09:25 UTC │
	│ start   │ -p pause-951206 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-951206             │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │ 01 Nov 25 09:25 UTC │
	│ pause   │ -p pause-951206 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-951206             │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ delete  │ -p force-systemd-env-778652                                                                                                                                                                                                                   │ force-systemd-env-778652 │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │ 01 Nov 25 09:25 UTC │
	│ delete  │ -p pause-951206                                                                                                                                                                                                                               │ pause-951206             │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │ 01 Nov 25 09:25 UTC │
	│ start   │ -p cert-expiration-218273 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-218273   │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │ 01 Nov 25 09:26 UTC │
	│ start   │ -p cert-options-578478 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-578478      │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │ 01 Nov 25 09:26 UTC │
	│ ssh     │ cert-options-578478 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-578478      │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ 01 Nov 25 09:26 UTC │
	│ ssh     │ -p cert-options-578478 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-578478      │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ 01 Nov 25 09:26 UTC │
	│ delete  │ -p cert-options-578478                                                                                                                                                                                                                        │ cert-options-578478      │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ 01 Nov 25 09:26 UTC │
	│ start   │ -p old-k8s-version-068218 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-068218   │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ 01 Nov 25 09:27 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-068218 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-068218   │ jenkins │ v1.37.0 │ 01 Nov 25 09:27 UTC │                     │
	│ stop    │ -p old-k8s-version-068218 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-068218   │ jenkins │ v1.37.0 │ 01 Nov 25 09:27 UTC │ 01 Nov 25 09:27 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-068218 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-068218   │ jenkins │ v1.37.0 │ 01 Nov 25 09:27 UTC │ 01 Nov 25 09:27 UTC │
	│ start   │ -p old-k8s-version-068218 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-068218   │ jenkins │ v1.37.0 │ 01 Nov 25 09:27 UTC │ 01 Nov 25 09:28 UTC │
	│ image   │ old-k8s-version-068218 image list --format=json                                                                                                                                                                                               │ old-k8s-version-068218   │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │ 01 Nov 25 09:28 UTC │
	│ pause   │ -p old-k8s-version-068218 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-068218   │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:27:53
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:27:53.625517 2491559 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:27:53.625647 2491559 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:27:53.625658 2491559 out.go:374] Setting ErrFile to fd 2...
	I1101 09:27:53.625663 2491559 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:27:53.625921 2491559 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 09:27:53.626442 2491559 out.go:368] Setting JSON to false
	I1101 09:27:53.627391 2491559 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":65420,"bootTime":1761923854,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 09:27:53.627455 2491559 start.go:143] virtualization:  
	I1101 09:27:53.630368 2491559 out.go:179] * [old-k8s-version-068218] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 09:27:53.634231 2491559 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:27:53.634379 2491559 notify.go:221] Checking for updates...
	I1101 09:27:53.640032 2491559 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:27:53.643011 2491559 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:27:53.646002 2491559 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2314135/.minikube
	I1101 09:27:53.648890 2491559 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 09:27:53.651839 2491559 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:27:53.655535 2491559 config.go:182] Loaded profile config "old-k8s-version-068218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 09:27:53.659041 2491559 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1101 09:27:53.661876 2491559 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:27:53.695700 2491559 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:27:53.695955 2491559 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:27:53.749502 2491559 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 09:27:53.740615043 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:27:53.749608 2491559 docker.go:319] overlay module found
	I1101 09:27:53.752734 2491559 out.go:179] * Using the docker driver based on existing profile
	I1101 09:27:53.755591 2491559 start.go:309] selected driver: docker
	I1101 09:27:53.755610 2491559 start.go:930] validating driver "docker" against &{Name:old-k8s-version-068218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-068218 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:27:53.755717 2491559 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:27:53.756467 2491559 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:27:53.813331 2491559 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 09:27:53.804648909 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:27:53.813672 2491559 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:27:53.813706 2491559 cni.go:84] Creating CNI manager for ""
	I1101 09:27:53.813758 2491559 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:27:53.813800 2491559 start.go:353] cluster config:
	{Name:old-k8s-version-068218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-068218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:27:53.817074 2491559 out.go:179] * Starting "old-k8s-version-068218" primary control-plane node in "old-k8s-version-068218" cluster
	I1101 09:27:53.820028 2491559 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:27:53.822994 2491559 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:27:53.825844 2491559 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 09:27:53.825922 2491559 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1101 09:27:53.825923 2491559 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:27:53.825936 2491559 cache.go:59] Caching tarball of preloaded images
	I1101 09:27:53.826094 2491559 preload.go:233] Found /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 09:27:53.826108 2491559 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1101 09:27:53.826283 2491559 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/config.json ...
	I1101 09:27:53.853749 2491559 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:27:53.853768 2491559 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:27:53.853788 2491559 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:27:53.853821 2491559 start.go:360] acquireMachinesLock for old-k8s-version-068218: {Name:mkfc282fcc0d94abffeef2a346c8ebfcf87a3759 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:27:53.853881 2491559 start.go:364] duration metric: took 42.649µs to acquireMachinesLock for "old-k8s-version-068218"
	I1101 09:27:53.853901 2491559 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:27:53.853906 2491559 fix.go:54] fixHost starting: 
	I1101 09:27:53.854165 2491559 cli_runner.go:164] Run: docker container inspect old-k8s-version-068218 --format={{.State.Status}}
	I1101 09:27:53.871456 2491559 fix.go:112] recreateIfNeeded on old-k8s-version-068218: state=Stopped err=<nil>
	W1101 09:27:53.871484 2491559 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:27:53.874677 2491559 out.go:252] * Restarting existing docker container for "old-k8s-version-068218" ...
	I1101 09:27:53.874767 2491559 cli_runner.go:164] Run: docker start old-k8s-version-068218
	I1101 09:27:54.127124 2491559 cli_runner.go:164] Run: docker container inspect old-k8s-version-068218 --format={{.State.Status}}
	I1101 09:27:54.146460 2491559 kic.go:430] container "old-k8s-version-068218" state is running.
	I1101 09:27:54.147002 2491559 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-068218
	I1101 09:27:54.171491 2491559 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/config.json ...
	I1101 09:27:54.171869 2491559 machine.go:94] provisionDockerMachine start ...
	I1101 09:27:54.171945 2491559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-068218
	I1101 09:27:54.195097 2491559 main.go:143] libmachine: Using SSH client type: native
	I1101 09:27:54.195408 2491559 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36340 <nil> <nil>}
	I1101 09:27:54.195416 2491559 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:27:54.196107 2491559 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36808->127.0.0.1:36340: read: connection reset by peer
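The provisioner reaches the restarted container over SSH on the host port Docker published for 22/tcp (36340 in this run); the first dial is reset because the container is still coming up, and the client succeeds on retry a few seconds later. A minimal sketch of doing the same lookup and login by hand, reusing the machine key and "docker" user from this log:

	# discover the host port Docker mapped to the container's 22/tcp (36340 in this run)
	PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-068218)
	# log in with the per-profile machine key minikube generated (path taken from this log)
	ssh -o StrictHostKeyChecking=no \
	    -i /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/old-k8s-version-068218/id_rsa \
	    -p "$PORT" docker@127.0.0.1 hostname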
	I1101 09:27:57.347340 2491559 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-068218
	
	I1101 09:27:57.347362 2491559 ubuntu.go:182] provisioning hostname "old-k8s-version-068218"
	I1101 09:27:57.347429 2491559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-068218
	I1101 09:27:57.365421 2491559 main.go:143] libmachine: Using SSH client type: native
	I1101 09:27:57.365737 2491559 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36340 <nil> <nil>}
	I1101 09:27:57.365759 2491559 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-068218 && echo "old-k8s-version-068218" | sudo tee /etc/hostname
	I1101 09:27:57.530134 2491559 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-068218
	
	I1101 09:27:57.530231 2491559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-068218
	I1101 09:27:57.548953 2491559 main.go:143] libmachine: Using SSH client type: native
	I1101 09:27:57.549274 2491559 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36340 <nil> <nil>}
	I1101 09:27:57.549332 2491559 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-068218' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-068218/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-068218' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:27:57.700021 2491559 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:27:57.700051 2491559 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-2314135/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-2314135/.minikube}
	I1101 09:27:57.700079 2491559 ubuntu.go:190] setting up certificates
	I1101 09:27:57.700089 2491559 provision.go:84] configureAuth start
	I1101 09:27:57.700148 2491559 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-068218
	I1101 09:27:57.718939 2491559 provision.go:143] copyHostCerts
	I1101 09:27:57.719009 2491559 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem, removing ...
	I1101 09:27:57.719031 2491559 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem
	I1101 09:27:57.719111 2491559 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem (1123 bytes)
	I1101 09:27:57.719220 2491559 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem, removing ...
	I1101 09:27:57.719231 2491559 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem
	I1101 09:27:57.719258 2491559 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem (1675 bytes)
	I1101 09:27:57.719312 2491559 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem, removing ...
	I1101 09:27:57.719320 2491559 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem
	I1101 09:27:57.719343 2491559 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem (1082 bytes)
	I1101 09:27:57.719393 2491559 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-068218 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-068218]
	I1101 09:27:58.354793 2491559 provision.go:177] copyRemoteCerts
	I1101 09:27:58.354859 2491559 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:27:58.354905 2491559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-068218
	I1101 09:27:58.372424 2491559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36340 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/old-k8s-version-068218/id_rsa Username:docker}
	I1101 09:27:58.479284 2491559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 09:27:58.496836 2491559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:27:58.514966 2491559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1101 09:27:58.531685 2491559 provision.go:87] duration metric: took 831.571466ms to configureAuth
	I1101 09:27:58.531709 2491559 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:27:58.531937 2491559 config.go:182] Loaded profile config "old-k8s-version-068218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 09:27:58.532044 2491559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-068218
	I1101 09:27:58.549182 2491559 main.go:143] libmachine: Using SSH client type: native
	I1101 09:27:58.549572 2491559 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36340 <nil> <nil>}
	I1101 09:27:58.549589 2491559 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:27:58.872239 2491559 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
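This step writes a sysconfig drop-in so CRI-O treats the cluster's service CIDR (10.96.0.0/12) as an insecure registry, then restarts crio; the echoed output above is the file's contents. A quick spot-check on the node, assuming the same path:

	# file written by the provisioning step above
	cat /etc/sysconfig/crio.minikube
	# expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '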
	I1101 09:27:58.872258 2491559 machine.go:97] duration metric: took 4.700375122s to provisionDockerMachine
	I1101 09:27:58.872269 2491559 start.go:293] postStartSetup for "old-k8s-version-068218" (driver="docker")
	I1101 09:27:58.872279 2491559 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:27:58.872335 2491559 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:27:58.872403 2491559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-068218
	I1101 09:27:58.892562 2491559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36340 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/old-k8s-version-068218/id_rsa Username:docker}
	I1101 09:27:58.995539 2491559 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:27:58.998729 2491559 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:27:58.998756 2491559 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:27:58.998767 2491559 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/addons for local assets ...
	I1101 09:27:58.998818 2491559 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/files for local assets ...
	I1101 09:27:58.998932 2491559 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem -> 23159822.pem in /etc/ssl/certs
	I1101 09:27:58.999040 2491559 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:27:59.007035 2491559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem --> /etc/ssl/certs/23159822.pem (1708 bytes)
	I1101 09:27:59.026617 2491559 start.go:296] duration metric: took 154.332987ms for postStartSetup
	I1101 09:27:59.026745 2491559 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:27:59.026790 2491559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-068218
	I1101 09:27:59.048855 2491559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36340 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/old-k8s-version-068218/id_rsa Username:docker}
	I1101 09:27:59.148932 2491559 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:27:59.153919 2491559 fix.go:56] duration metric: took 5.300007009s for fixHost
	I1101 09:27:59.153942 2491559 start.go:83] releasing machines lock for "old-k8s-version-068218", held for 5.300052423s
	I1101 09:27:59.154023 2491559 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-068218
	I1101 09:27:59.171770 2491559 ssh_runner.go:195] Run: cat /version.json
	I1101 09:27:59.171830 2491559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-068218
	I1101 09:27:59.171931 2491559 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:27:59.171984 2491559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-068218
	I1101 09:27:59.190436 2491559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36340 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/old-k8s-version-068218/id_rsa Username:docker}
	I1101 09:27:59.213046 2491559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36340 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/old-k8s-version-068218/id_rsa Username:docker}
	I1101 09:27:59.402539 2491559 ssh_runner.go:195] Run: systemctl --version
	I1101 09:27:59.408890 2491559 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:27:59.446353 2491559 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:27:59.450718 2491559 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:27:59.450791 2491559 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:27:59.458824 2491559 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 09:27:59.458852 2491559 start.go:496] detecting cgroup driver to use...
	I1101 09:27:59.458883 2491559 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 09:27:59.458940 2491559 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:27:59.474085 2491559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:27:59.487016 2491559 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:27:59.487116 2491559 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:27:59.503008 2491559 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:27:59.516520 2491559 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:27:59.627273 2491559 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:27:59.742132 2491559 docker.go:234] disabling docker service ...
	I1101 09:27:59.742222 2491559 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:27:59.759991 2491559 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:27:59.773316 2491559 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:27:59.895613 2491559 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:28:00.025382 2491559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:28:00.049339 2491559 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:28:00.074272 2491559 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 09:28:00.074358 2491559 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:28:00.099294 2491559 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:28:00.099387 2491559 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:28:00.123787 2491559 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:28:00.150254 2491559 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:28:00.177995 2491559 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:28:00.204108 2491559 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:28:00.259076 2491559 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:28:00.278494 2491559 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:28:00.292127 2491559 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:28:00.305512 2491559 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:28:00.339980 2491559 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:28:00.550308 2491559 ssh_runner.go:195] Run: sudo systemctl restart crio
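The sed/grep runs above edit /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, and an unprivileged-port sysctl under default_sysctls) before crio is restarted. A sketch of spot-checking the drop-in afterwards, with the values implied by those commands shown as comments:

	# verify the edits landed in the CRI-O drop-in
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",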
	I1101 09:28:00.697410 2491559 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:28:00.697478 2491559 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:28:00.701366 2491559 start.go:564] Will wait 60s for crictl version
	I1101 09:28:00.701432 2491559 ssh_runner.go:195] Run: which crictl
	I1101 09:28:00.704871 2491559 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:28:00.729651 2491559 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:28:00.729752 2491559 ssh_runner.go:195] Run: crio --version
	I1101 09:28:00.760059 2491559 ssh_runner.go:195] Run: crio --version
	I1101 09:28:00.793535 2491559 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1101 09:28:00.796580 2491559 cli_runner.go:164] Run: docker network inspect old-k8s-version-068218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:28:00.811545 2491559 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 09:28:00.815468 2491559 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:28:00.824845 2491559 kubeadm.go:884] updating cluster {Name:old-k8s-version-068218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-068218 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:28:00.824954 2491559 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 09:28:00.825011 2491559 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:28:00.859837 2491559 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:28:00.859887 2491559 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:28:00.859943 2491559 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:28:00.887443 2491559 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:28:00.887471 2491559 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:28:00.887479 2491559 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1101 09:28:00.887580 2491559 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-068218 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-068218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
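The kubelet flags above are rendered into a systemd drop-in (10-kubeadm.conf, copied a few lines below as 372 bytes); the empty ExecStart= line clears the unit's original command so the drop-in replaces it rather than appending a second one. A sketch of inspecting the merged unit on the node:

	# show the kubelet unit together with the 10-kubeadm.conf drop-in minikube installed
	systemctl cat kubelet | head -n 20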
	I1101 09:28:00.887663 2491559 ssh_runner.go:195] Run: crio config
	I1101 09:28:00.951649 2491559 cni.go:84] Creating CNI manager for ""
	I1101 09:28:00.951718 2491559 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:28:00.951755 2491559 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:28:00.951816 2491559 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-068218 NodeName:old-k8s-version-068218 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:28:00.952023 2491559 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-068218"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
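The generated config is written to /var/tmp/minikube/kubeadm.yaml.new (2160 bytes, per the scp line below) and holds three documents: InitConfiguration and ClusterConfiguration for kubeadm v1beta3, a KubeletConfiguration, and a KubeProxyConfiguration. If you want to sanity-check such a file by hand, a dry run against the pinned kubeadm binary is one option; this is a sketch only, the job itself does not run it:

	# validate the generated config without mutating the node (not executed by this job)
	sudo /var/lib/minikube/binaries/v1.28.0/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml.new --dry-run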
	I1101 09:28:00.952113 2491559 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1101 09:28:00.959652 2491559 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:28:00.959715 2491559 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:28:00.966821 2491559 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1101 09:28:00.979481 2491559 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:28:00.991664 2491559 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1101 09:28:01.004405 2491559 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:28:01.009154 2491559 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
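Both /etc/hosts updates use the same idempotent pattern: strip any existing line for the name, append the fresh entry, and copy the result back. After this step the node resolves host.minikube.internal to the network gateway (192.168.85.1) and control-plane.minikube.internal to the node IP (192.168.85.2). A quick check on the node:

	# both names are pinned in the node's /etc/hosts by the snippets above
	grep -E 'host.minikube.internal|control-plane.minikube.internal' /etc/hosts
	# 192.168.85.1	host.minikube.internal
	# 192.168.85.2	control-plane.minikube.internal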
	I1101 09:28:01.018470 2491559 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:28:01.141461 2491559 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:28:01.158588 2491559 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218 for IP: 192.168.85.2
	I1101 09:28:01.158662 2491559 certs.go:195] generating shared ca certs ...
	I1101 09:28:01.158693 2491559 certs.go:227] acquiring lock for ca certs: {Name:mk24842b93d4e231663829c7c8677798ff77a3a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:28:01.158878 2491559 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key
	I1101 09:28:01.158976 2491559 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key
	I1101 09:28:01.159005 2491559 certs.go:257] generating profile certs ...
	I1101 09:28:01.159160 2491559 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/client.key
	I1101 09:28:01.159278 2491559 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/apiserver.key.85e8465c
	I1101 09:28:01.159372 2491559 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/proxy-client.key
	I1101 09:28:01.159538 2491559 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem (1338 bytes)
	W1101 09:28:01.159605 2491559 certs.go:480] ignoring /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982_empty.pem, impossibly tiny 0 bytes
	I1101 09:28:01.159630 2491559 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 09:28:01.159687 2491559 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:28:01.159748 2491559 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:28:01.159802 2491559 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem (1675 bytes)
	I1101 09:28:01.159938 2491559 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem (1708 bytes)
	I1101 09:28:01.160843 2491559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:28:01.184435 2491559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 09:28:01.204423 2491559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:28:01.223921 2491559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:28:01.244953 2491559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 09:28:01.269713 2491559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 09:28:01.291234 2491559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:28:01.311945 2491559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 09:28:01.343275 2491559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:28:01.367338 2491559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem --> /usr/share/ca-certificates/2315982.pem (1338 bytes)
	I1101 09:28:01.390458 2491559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem --> /usr/share/ca-certificates/23159822.pem (1708 bytes)
	I1101 09:28:01.411414 2491559 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:28:01.436905 2491559 ssh_runner.go:195] Run: openssl version
	I1101 09:28:01.444314 2491559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23159822.pem && ln -fs /usr/share/ca-certificates/23159822.pem /etc/ssl/certs/23159822.pem"
	I1101 09:28:01.457253 2491559 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23159822.pem
	I1101 09:28:01.462836 2491559 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 08:36 /usr/share/ca-certificates/23159822.pem
	I1101 09:28:01.462979 2491559 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23159822.pem
	I1101 09:28:01.530638 2491559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23159822.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:28:01.540625 2491559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:28:01.549985 2491559 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:28:01.554389 2491559 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:28:01.554458 2491559 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:28:01.600580 2491559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:28:01.612149 2491559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2315982.pem && ln -fs /usr/share/ca-certificates/2315982.pem /etc/ssl/certs/2315982.pem"
	I1101 09:28:01.621589 2491559 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2315982.pem
	I1101 09:28:01.627883 2491559 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 08:36 /usr/share/ca-certificates/2315982.pem
	I1101 09:28:01.628025 2491559 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2315982.pem
	I1101 09:28:01.679479 2491559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2315982.pem /etc/ssl/certs/51391683.0"
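Each CA file copied into /usr/share/ca-certificates is also linked into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0 for minikubeCA, 3ec20f2e.0 and 51391683.0 for the two test certs above), which is how OpenSSL-based clients on the node locate it. The hash in the link name comes straight from openssl, for example:

	# the link name is the certificate's subject hash plus ".0"
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0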
	I1101 09:28:01.687727 2491559 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:28:01.691803 2491559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:28:01.735006 2491559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:28:01.777726 2491559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:28:01.830677 2491559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:28:01.884146 2491559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:28:01.953537 2491559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
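Each existing control-plane certificate is checked with openssl's -checkend 86400, which exits non-zero if the certificate expires within the next 24 hours. The same check by hand against one of the certs named above:

	# exit status 0 = valid for at least another 24h; non-zero = expiring or expired
	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	    && echo "valid for >24h" || echo "expires within 24h"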
	I1101 09:28:02.044328 2491559 kubeadm.go:401] StartCluster: {Name:old-k8s-version-068218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-068218 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:28:02.044488 2491559 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:28:02.044599 2491559 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:28:02.108684 2491559 cri.go:89] found id: "fe733df8bf3e845ffe6b6dedb1032f3540ea13212061a9c8d745c49a950708c5"
	I1101 09:28:02.108748 2491559 cri.go:89] found id: "067c804f8e21876fb45f3c152802ae3d319e8a7ba1a0ed58c096fa2d93f176f8"
	I1101 09:28:02.108768 2491559 cri.go:89] found id: "74dcaccfb8d03e88ec7bc0d5f860e724acc1ef7e6b6647ac057b5ec4884a4749"
	I1101 09:28:02.108791 2491559 cri.go:89] found id: "7fb0e5e75636afbc0298538d44e50df7785d62e2185f396e1c8404fbf222a6e4"
	I1101 09:28:02.108809 2491559 cri.go:89] found id: ""
	I1101 09:28:02.108894 2491559 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 09:28:02.134503 2491559 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:28:02Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:28:02.134639 2491559 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:28:02.150523 2491559 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 09:28:02.150582 2491559 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 09:28:02.150661 2491559 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 09:28:02.161955 2491559 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:28:02.162574 2491559 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-068218" does not appear in /home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:28:02.162881 2491559 kubeconfig.go:62] /home/jenkins/minikube-integration/21835-2314135/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-068218" cluster setting kubeconfig missing "old-k8s-version-068218" context setting]
	I1101 09:28:02.163394 2491559 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/kubeconfig: {Name:mk53329368b7306829f4e47471838b13e1e36d2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:28:02.165314 2491559 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 09:28:02.183774 2491559 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1101 09:28:02.183879 2491559 kubeadm.go:602] duration metric: took 33.245732ms to restartPrimaryControlPlane
	I1101 09:28:02.183907 2491559 kubeadm.go:403] duration metric: took 139.590857ms to StartCluster
	I1101 09:28:02.183936 2491559 settings.go:142] acquiring lock: {Name:mka73a3765cb6575d4abe38a6ae3325222684786 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:28:02.184016 2491559 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:28:02.185003 2491559 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/kubeconfig: {Name:mk53329368b7306829f4e47471838b13e1e36d2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:28:02.185244 2491559 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:28:02.185607 2491559 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:28:02.185678 2491559 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-068218"
	I1101 09:28:02.185693 2491559 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-068218"
	W1101 09:28:02.185700 2491559 addons.go:248] addon storage-provisioner should already be in state true
	I1101 09:28:02.185720 2491559 host.go:66] Checking if "old-k8s-version-068218" exists ...
	I1101 09:28:02.186481 2491559 cli_runner.go:164] Run: docker container inspect old-k8s-version-068218 --format={{.State.Status}}
	I1101 09:28:02.186894 2491559 config.go:182] Loaded profile config "old-k8s-version-068218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 09:28:02.186972 2491559 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-068218"
	I1101 09:28:02.187011 2491559 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-068218"
	I1101 09:28:02.187312 2491559 cli_runner.go:164] Run: docker container inspect old-k8s-version-068218 --format={{.State.Status}}
	I1101 09:28:02.187517 2491559 addons.go:70] Setting dashboard=true in profile "old-k8s-version-068218"
	I1101 09:28:02.187552 2491559 addons.go:239] Setting addon dashboard=true in "old-k8s-version-068218"
	W1101 09:28:02.187571 2491559 addons.go:248] addon dashboard should already be in state true
	I1101 09:28:02.187645 2491559 host.go:66] Checking if "old-k8s-version-068218" exists ...
	I1101 09:28:02.188094 2491559 cli_runner.go:164] Run: docker container inspect old-k8s-version-068218 --format={{.State.Status}}
	I1101 09:28:02.190333 2491559 out.go:179] * Verifying Kubernetes components...
	I1101 09:28:02.198326 2491559 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:28:02.240730 2491559 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:28:02.246074 2491559 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:28:02.246098 2491559 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:28:02.246170 2491559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-068218
	I1101 09:28:02.256257 2491559 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-068218"
	W1101 09:28:02.256284 2491559 addons.go:248] addon default-storageclass should already be in state true
	I1101 09:28:02.256310 2491559 host.go:66] Checking if "old-k8s-version-068218" exists ...
	I1101 09:28:02.256737 2491559 cli_runner.go:164] Run: docker container inspect old-k8s-version-068218 --format={{.State.Status}}
	I1101 09:28:02.284945 2491559 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 09:28:02.288575 2491559 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 09:28:02.291978 2491559 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 09:28:02.292013 2491559 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 09:28:02.292087 2491559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-068218
	I1101 09:28:02.301839 2491559 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:28:02.301860 2491559 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:28:02.301934 2491559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-068218
	I1101 09:28:02.312829 2491559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36340 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/old-k8s-version-068218/id_rsa Username:docker}
	I1101 09:28:02.356591 2491559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36340 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/old-k8s-version-068218/id_rsa Username:docker}
	I1101 09:28:02.359528 2491559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36340 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/old-k8s-version-068218/id_rsa Username:docker}
	I1101 09:28:02.584387 2491559 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:28:02.596279 2491559 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:28:02.632837 2491559 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-068218" to be "Ready" ...
	I1101 09:28:02.694637 2491559 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 09:28:02.694659 2491559 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 09:28:02.718767 2491559 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:28:02.747626 2491559 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 09:28:02.747650 2491559 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 09:28:02.843526 2491559 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 09:28:02.843597 2491559 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 09:28:02.928475 2491559 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 09:28:02.928556 2491559 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 09:28:02.965144 2491559 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 09:28:02.965215 2491559 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 09:28:03.002493 2491559 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 09:28:03.002586 2491559 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 09:28:03.028328 2491559 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 09:28:03.028402 2491559 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 09:28:03.052642 2491559 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 09:28:03.052711 2491559 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 09:28:03.068211 2491559 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 09:28:03.068282 2491559 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 09:28:03.090834 2491559 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 09:28:06.958751 2491559 node_ready.go:49] node "old-k8s-version-068218" is "Ready"
	I1101 09:28:06.958779 2491559 node_ready.go:38] duration metric: took 4.325863033s for node "old-k8s-version-068218" to be "Ready" ...
	I1101 09:28:06.958795 2491559 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:28:06.958855 2491559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:28:08.776895 2491559 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.058093891s)
	I1101 09:28:08.777185 2491559 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.18083436s)
	I1101 09:28:09.326948 2491559 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.236075616s)
	I1101 09:28:09.326997 2491559 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.368126352s)
	I1101 09:28:09.327215 2491559 api_server.go:72] duration metric: took 7.141911391s to wait for apiserver process to appear ...
	I1101 09:28:09.327225 2491559 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:28:09.327242 2491559 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:28:09.330002 2491559 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-068218 addons enable metrics-server
	
	I1101 09:28:09.333074 2491559 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1101 09:28:09.335909 2491559 addons.go:515] duration metric: took 7.150288528s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1101 09:28:09.336865 2491559 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1101 09:28:09.338255 2491559 api_server.go:141] control plane version: v1.28.0
	I1101 09:28:09.338277 2491559 api_server.go:131] duration metric: took 11.046003ms to wait for apiserver health ...
	I1101 09:28:09.338286 2491559 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:28:09.344282 2491559 system_pods.go:59] 8 kube-system pods found
	I1101 09:28:09.344321 2491559 system_pods.go:61] "coredns-5dd5756b68-b4f66" [6758b28d-65e8-4750-8150-214984beb6a2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:28:09.344331 2491559 system_pods.go:61] "etcd-old-k8s-version-068218" [97c22198-a6fa-4d82-8ae3-981cf4543c10] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:28:09.344337 2491559 system_pods.go:61] "kindnet-8ks7s" [7eeb1ffb-51f8-4229-bf9c-6457fdc0eede] Running
	I1101 09:28:09.344344 2491559 system_pods.go:61] "kube-apiserver-old-k8s-version-068218" [13d7db97-cfab-4362-b3b7-ac0a5aef54fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:28:09.344351 2491559 system_pods.go:61] "kube-controller-manager-old-k8s-version-068218" [b0d936ee-d062-4e6c-9d95-4574d23b71fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:28:09.344364 2491559 system_pods.go:61] "kube-proxy-9574h" [23a5f11d-f074-4c54-a831-2ec6b7220d73] Running
	I1101 09:28:09.344372 2491559 system_pods.go:61] "kube-scheduler-old-k8s-version-068218" [b70eb666-3066-4829-ba12-05475e5c8509] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:28:09.344379 2491559 system_pods.go:61] "storage-provisioner" [2cf435bc-9907-4482-a9ba-eee3b7afe7d2] Running
	I1101 09:28:09.344385 2491559 system_pods.go:74] duration metric: took 6.09451ms to wait for pod list to return data ...
	I1101 09:28:09.344397 2491559 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:28:09.347290 2491559 default_sa.go:45] found service account: "default"
	I1101 09:28:09.347324 2491559 default_sa.go:55] duration metric: took 2.920575ms for default service account to be created ...
	I1101 09:28:09.347333 2491559 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:28:09.350708 2491559 system_pods.go:86] 8 kube-system pods found
	I1101 09:28:09.350737 2491559 system_pods.go:89] "coredns-5dd5756b68-b4f66" [6758b28d-65e8-4750-8150-214984beb6a2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:28:09.350768 2491559 system_pods.go:89] "etcd-old-k8s-version-068218" [97c22198-a6fa-4d82-8ae3-981cf4543c10] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:28:09.350783 2491559 system_pods.go:89] "kindnet-8ks7s" [7eeb1ffb-51f8-4229-bf9c-6457fdc0eede] Running
	I1101 09:28:09.350799 2491559 system_pods.go:89] "kube-apiserver-old-k8s-version-068218" [13d7db97-cfab-4362-b3b7-ac0a5aef54fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:28:09.350806 2491559 system_pods.go:89] "kube-controller-manager-old-k8s-version-068218" [b0d936ee-d062-4e6c-9d95-4574d23b71fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:28:09.350820 2491559 system_pods.go:89] "kube-proxy-9574h" [23a5f11d-f074-4c54-a831-2ec6b7220d73] Running
	I1101 09:28:09.350842 2491559 system_pods.go:89] "kube-scheduler-old-k8s-version-068218" [b70eb666-3066-4829-ba12-05475e5c8509] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:28:09.350860 2491559 system_pods.go:89] "storage-provisioner" [2cf435bc-9907-4482-a9ba-eee3b7afe7d2] Running
	I1101 09:28:09.350868 2491559 system_pods.go:126] duration metric: took 3.528927ms to wait for k8s-apps to be running ...
	I1101 09:28:09.350889 2491559 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:28:09.350972 2491559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:28:09.378121 2491559 system_svc.go:56] duration metric: took 27.223688ms WaitForService to wait for kubelet
	I1101 09:28:09.378159 2491559 kubeadm.go:587] duration metric: took 7.192864446s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:28:09.378178 2491559 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:28:09.381476 2491559 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 09:28:09.381508 2491559 node_conditions.go:123] node cpu capacity is 2
	I1101 09:28:09.381520 2491559 node_conditions.go:105] duration metric: took 3.336826ms to run NodePressure ...
	I1101 09:28:09.381557 2491559 start.go:242] waiting for startup goroutines ...
	I1101 09:28:09.381566 2491559 start.go:247] waiting for cluster config update ...
	I1101 09:28:09.381581 2491559 start.go:256] writing updated cluster config ...
	I1101 09:28:09.381871 2491559 ssh_runner.go:195] Run: rm -f paused
	I1101 09:28:09.386181 2491559 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:28:09.390464 2491559 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-b4f66" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 09:28:11.396391 2491559 pod_ready.go:104] pod "coredns-5dd5756b68-b4f66" is not "Ready", error: <nil>
	W1101 09:28:13.895838 2491559 pod_ready.go:104] pod "coredns-5dd5756b68-b4f66" is not "Ready", error: <nil>
	W1101 09:28:15.896307 2491559 pod_ready.go:104] pod "coredns-5dd5756b68-b4f66" is not "Ready", error: <nil>
	W1101 09:28:17.896761 2491559 pod_ready.go:104] pod "coredns-5dd5756b68-b4f66" is not "Ready", error: <nil>
	W1101 09:28:19.897322 2491559 pod_ready.go:104] pod "coredns-5dd5756b68-b4f66" is not "Ready", error: <nil>
	W1101 09:28:22.396594 2491559 pod_ready.go:104] pod "coredns-5dd5756b68-b4f66" is not "Ready", error: <nil>
	W1101 09:28:24.397106 2491559 pod_ready.go:104] pod "coredns-5dd5756b68-b4f66" is not "Ready", error: <nil>
	W1101 09:28:26.898327 2491559 pod_ready.go:104] pod "coredns-5dd5756b68-b4f66" is not "Ready", error: <nil>
	W1101 09:28:29.395950 2491559 pod_ready.go:104] pod "coredns-5dd5756b68-b4f66" is not "Ready", error: <nil>
	W1101 09:28:31.396342 2491559 pod_ready.go:104] pod "coredns-5dd5756b68-b4f66" is not "Ready", error: <nil>
	W1101 09:28:33.896523 2491559 pod_ready.go:104] pod "coredns-5dd5756b68-b4f66" is not "Ready", error: <nil>
	W1101 09:28:35.896613 2491559 pod_ready.go:104] pod "coredns-5dd5756b68-b4f66" is not "Ready", error: <nil>
	W1101 09:28:38.396390 2491559 pod_ready.go:104] pod "coredns-5dd5756b68-b4f66" is not "Ready", error: <nil>
	I1101 09:28:40.396405 2491559 pod_ready.go:94] pod "coredns-5dd5756b68-b4f66" is "Ready"
	I1101 09:28:40.396432 2491559 pod_ready.go:86] duration metric: took 31.005940859s for pod "coredns-5dd5756b68-b4f66" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:28:40.399995 2491559 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-068218" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:28:40.407130 2491559 pod_ready.go:94] pod "etcd-old-k8s-version-068218" is "Ready"
	I1101 09:28:40.407154 2491559 pod_ready.go:86] duration metric: took 7.13531ms for pod "etcd-old-k8s-version-068218" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:28:40.409857 2491559 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-068218" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:28:40.414425 2491559 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-068218" is "Ready"
	I1101 09:28:40.414446 2491559 pod_ready.go:86] duration metric: took 4.567939ms for pod "kube-apiserver-old-k8s-version-068218" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:28:40.424339 2491559 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-068218" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:28:40.594415 2491559 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-068218" is "Ready"
	I1101 09:28:40.594445 2491559 pod_ready.go:86] duration metric: took 170.07303ms for pod "kube-controller-manager-old-k8s-version-068218" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:28:40.795051 2491559 pod_ready.go:83] waiting for pod "kube-proxy-9574h" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:28:41.194685 2491559 pod_ready.go:94] pod "kube-proxy-9574h" is "Ready"
	I1101 09:28:41.194717 2491559 pod_ready.go:86] duration metric: took 399.642303ms for pod "kube-proxy-9574h" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:28:41.395614 2491559 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-068218" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:28:41.794511 2491559 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-068218" is "Ready"
	I1101 09:28:41.794538 2491559 pod_ready.go:86] duration metric: took 398.89496ms for pod "kube-scheduler-old-k8s-version-068218" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:28:41.794554 2491559 pod_ready.go:40] duration metric: took 32.408340873s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:28:41.852590 2491559 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1101 09:28:41.855639 2491559 out.go:203] 
	W1101 09:28:41.858451 2491559 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1101 09:28:41.861304 2491559 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1101 09:28:41.864115 2491559 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-068218" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 01 09:28:42 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:42.371728971Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:28:42 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:42.389415914Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:28:42 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:42.389990036Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:28:42 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:42.407522521Z" level=info msg="Created container a5fae30fce3491b8f98375ff9f0a4ceabfd362edca9834cecb6515441319d920: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ljb5f/dashboard-metrics-scraper" id=698a0413-f93d-49d7-8269-464c27b0a0bd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:28:42 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:42.408669574Z" level=info msg="Starting container: a5fae30fce3491b8f98375ff9f0a4ceabfd362edca9834cecb6515441319d920" id=8e8ce472-f70a-484d-ac5f-86894c402c9d name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:28:42 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:42.412835366Z" level=info msg="Started container" PID=1659 containerID=a5fae30fce3491b8f98375ff9f0a4ceabfd362edca9834cecb6515441319d920 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ljb5f/dashboard-metrics-scraper id=8e8ce472-f70a-484d-ac5f-86894c402c9d name=/runtime.v1.RuntimeService/StartContainer sandboxID=f7d4c0d99538d37c2aee50def677ba733624a3bc0372a301137746f7a6820f89
	Nov 01 09:28:42 old-k8s-version-068218 conmon[1657]: conmon a5fae30fce3491b8f983 <ninfo>: container 1659 exited with status 1
	Nov 01 09:28:42 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:42.564505707Z" level=info msg="Removing container: 03c81ded95b883b274bc6dbbd9ef03122d76502dde5956dda1e74c3c1b42f6ff" id=8d7d050d-7d3d-4a84-9f2c-b6230e1b8ab4 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:28:42 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:42.57731089Z" level=info msg="Error loading conmon cgroup of container 03c81ded95b883b274bc6dbbd9ef03122d76502dde5956dda1e74c3c1b42f6ff: cgroup deleted" id=8d7d050d-7d3d-4a84-9f2c-b6230e1b8ab4 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:28:42 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:42.581856643Z" level=info msg="Removed container 03c81ded95b883b274bc6dbbd9ef03122d76502dde5956dda1e74c3c1b42f6ff: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ljb5f/dashboard-metrics-scraper" id=8d7d050d-7d3d-4a84-9f2c-b6230e1b8ab4 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:28:48 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:48.113852796Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:28:48 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:48.119077635Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:28:48 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:48.119113885Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:28:48 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:48.119140354Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:28:48 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:48.122163679Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:28:48 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:48.122197565Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:28:48 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:48.122221064Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:28:48 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:48.125267937Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:28:48 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:48.12529873Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:28:48 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:48.125322122Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:28:48 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:48.128349058Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:28:48 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:48.128382526Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:28:48 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:48.128406098Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:28:48 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:48.131511382Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:28:48 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:48.13154366Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	a5fae30fce349       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago      Exited              dashboard-metrics-scraper   2                   f7d4c0d99538d       dashboard-metrics-scraper-5f989dc9cf-ljb5f       kubernetes-dashboard
	aa7739ac6e46f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           18 seconds ago      Running             storage-provisioner         2                   3f8ceffec7dfd       storage-provisioner                              kube-system
	af4e0baac89bb       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   28 seconds ago      Running             kubernetes-dashboard        0                   447b5163e1b51       kubernetes-dashboard-8694d4445c-cftdr            kubernetes-dashboard
	3762d722428dc       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           48 seconds ago      Running             coredns                     1                   2a92a2ae47813       coredns-5dd5756b68-b4f66                         kube-system
	3cbff79beb5a9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           48 seconds ago      Exited              storage-provisioner         1                   3f8ceffec7dfd       storage-provisioner                              kube-system
	cf694c627ea59       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           49 seconds ago      Running             busybox                     1                   73ef8bd5b2270       busybox                                          default
	e915ac6e3880e       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           49 seconds ago      Running             kube-proxy                  1                   065ab670283e5       kube-proxy-9574h                                 kube-system
	eec2809483898       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           49 seconds ago      Running             kindnet-cni                 1                   4a9fd70ff377f       kindnet-8ks7s                                    kube-system
	fe733df8bf3e8       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           54 seconds ago      Running             kube-scheduler              1                   9e41e854b2c40       kube-scheduler-old-k8s-version-068218            kube-system
	067c804f8e218       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           54 seconds ago      Running             kube-apiserver              1                   26d784aba1f15       kube-apiserver-old-k8s-version-068218            kube-system
	74dcaccfb8d03       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           54 seconds ago      Running             kube-controller-manager     1                   4bddcde50d97b       kube-controller-manager-old-k8s-version-068218   kube-system
	7fb0e5e75636a       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           54 seconds ago      Running             etcd                        1                   d6b8f1be625e9       etcd-old-k8s-version-068218                      kube-system
	
	
	==> coredns [3762d722428dc59ef53f0455f537bb438e72cf8437c310c1a43dd9b5f7b7fb14] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47499 - 1305 "HINFO IN 219060550359639124.4578615173338036684. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014453154s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-068218
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-068218
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=old-k8s-version-068218
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_27_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:26:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-068218
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:28:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:28:37 +0000   Sat, 01 Nov 2025 09:26:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:28:37 +0000   Sat, 01 Nov 2025 09:26:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:28:37 +0000   Sat, 01 Nov 2025 09:26:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:28:37 +0000   Sat, 01 Nov 2025 09:27:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-068218
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                84351bdd-8654-4943-b8ea-c75bd6268b89
	  Boot ID:                    eebecd53-57fd-46e5-aa39-103fca906436
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-5dd5756b68-b4f66                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     104s
	  kube-system                 etcd-old-k8s-version-068218                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         116s
	  kube-system                 kindnet-8ks7s                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-old-k8s-version-068218             250m (12%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-old-k8s-version-068218    200m (10%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-9574h                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-old-k8s-version-068218             100m (5%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-ljb5f        0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-cftdr             0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 103s                 kube-proxy       
	  Normal  Starting                 48s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m4s (x9 over 2m4s)  kubelet          Node old-k8s-version-068218 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s (x8 over 2m4s)  kubelet          Node old-k8s-version-068218 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s (x7 over 2m4s)  kubelet          Node old-k8s-version-068218 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     116s                 kubelet          Node old-k8s-version-068218 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  116s                 kubelet          Node old-k8s-version-068218 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s                 kubelet          Node old-k8s-version-068218 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 116s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           104s                 node-controller  Node old-k8s-version-068218 event: Registered Node old-k8s-version-068218 in Controller
	  Normal  NodeReady                90s                  kubelet          Node old-k8s-version-068218 status is now: NodeReady
	  Normal  Starting                 55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)    kubelet          Node old-k8s-version-068218 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)    kubelet          Node old-k8s-version-068218 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)    kubelet          Node old-k8s-version-068218 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           37s                  node-controller  Node old-k8s-version-068218 event: Registered Node old-k8s-version-068218 in Controller
	
	
	==> dmesg <==
	[Nov 1 09:03] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:08] overlayfs: idmapped layers are currently not supported
	[ +35.036001] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:10] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:11] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:12] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:13] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:14] overlayfs: idmapped layers are currently not supported
	[  +7.992192] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:15] overlayfs: idmapped layers are currently not supported
	[ +24.457663] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:16] overlayfs: idmapped layers are currently not supported
	[ +26.408819] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:18] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:22] overlayfs: idmapped layers are currently not supported
	[ +31.970573] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:24] overlayfs: idmapped layers are currently not supported
	[ +34.721891] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:25] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:26] overlayfs: idmapped layers are currently not supported
	[  +0.217637] overlayfs: idmapped layers are currently not supported
	[ +42.063471] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:28] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [7fb0e5e75636afbc0298538d44e50df7785d62e2185f396e1c8404fbf222a6e4] <==
	{"level":"info","ts":"2025-11-01T09:28:02.241883Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T09:28:02.241928Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T09:28:02.242123Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-01T09:28:02.2424Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-01T09:28:02.24248Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-01T09:28:02.242904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-01T09:28:02.243006Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-01T09:28:02.243126Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T09:28:02.252184Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T09:28:02.253189Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-01T09:28:02.292872Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-01T09:28:03.172233Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-01T09:28:03.172285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-01T09:28:03.172306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-01T09:28:03.172319Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-11-01T09:28:03.172325Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-01T09:28:03.172339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-11-01T09:28:03.172347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-01T09:28:03.17742Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-068218 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-01T09:28:03.177604Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T09:28:03.178597Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-01T09:28:03.178911Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T09:28:03.179767Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-01T09:28:03.205762Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-01T09:28:03.205804Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 09:28:57 up 18:11,  0 user,  load average: 2.13, 3.51, 2.91
	Linux old-k8s-version-068218 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [eec280948389885da1b27c55ff4b58fbb0c1a0294e5d4c42be0a4b9d1da3ad5c] <==
	I1101 09:28:07.913871       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:28:07.914111       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 09:28:07.914235       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:28:07.914253       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:28:07.914267       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:28:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:28:08.148754       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:28:08.153452       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:28:08.153484       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:28:08.153942       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 09:28:38.109560       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 09:28:38.149250       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 09:28:38.154737       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 09:28:38.154902       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1101 09:28:39.753994       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:28:39.754026       1 metrics.go:72] Registering metrics
	I1101 09:28:39.754084       1 controller.go:711] "Syncing nftables rules"
	I1101 09:28:48.112691       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 09:28:48.112842       1 main.go:301] handling current node
	
	
	==> kube-apiserver [067c804f8e21876fb45f3c152802ae3d319e8a7ba1a0ed58c096fa2d93f176f8] <==
	I1101 09:28:06.784666       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1101 09:28:06.962996       1 shared_informer.go:318] Caches are synced for configmaps
	I1101 09:28:06.986978       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1101 09:28:06.991744       1 aggregator.go:166] initial CRD sync complete...
	I1101 09:28:06.991888       1 autoregister_controller.go:141] Starting autoregister controller
	I1101 09:28:06.991923       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:28:07.019686       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:28:07.055418       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 09:28:07.074548       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1101 09:28:07.074644       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1101 09:28:07.075315       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1101 09:28:07.075801       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1101 09:28:07.081056       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1101 09:28:07.107362       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:28:07.791467       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:28:09.146172       1 controller.go:624] quota admission added evaluator for: namespaces
	I1101 09:28:09.196748       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1101 09:28:09.224417       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:28:09.233573       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:28:09.242550       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1101 09:28:09.297541       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.205.2"}
	I1101 09:28:09.319151       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.33.244"}
	I1101 09:28:19.289476       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1101 09:28:19.310896       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:28:19.481769       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [74dcaccfb8d03e88ec7bc0d5f860e724acc1ef7e6b6647ac057b5ec4884a4749] <==
	I1101 09:28:19.390199       1 shared_informer.go:318] Caches are synced for stateful set
	I1101 09:28:19.398700       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="96.494766ms"
	I1101 09:28:19.405341       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="20.06277ms"
	I1101 09:28:19.405430       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="31.03µs"
	I1101 09:28:19.411913       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="13.109863ms"
	I1101 09:28:19.412974       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="854.705µs"
	I1101 09:28:19.424670       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="60.963µs"
	I1101 09:28:19.424818       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 09:28:19.435476       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1101 09:28:19.448169       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="108.289µs"
	I1101 09:28:19.456000       1 shared_informer.go:318] Caches are synced for cronjob
	I1101 09:28:19.465896       1 shared_informer.go:318] Caches are synced for endpoint
	I1101 09:28:19.489680       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 09:28:19.849615       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 09:28:19.879805       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 09:28:19.879958       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1101 09:28:24.516721       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="85.569µs"
	I1101 09:28:25.533110       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="80.654µs"
	I1101 09:28:26.537005       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.192µs"
	I1101 09:28:29.549556       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.660752ms"
	I1101 09:28:29.549742       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="85.971µs"
	I1101 09:28:40.248466       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.779461ms"
	I1101 09:28:40.248738       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.6µs"
	I1101 09:28:42.580226       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="49.828µs"
	I1101 09:28:49.700328       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="49.763µs"
	
	
	==> kube-proxy [e915ac6e3880e2ad0729af6f8b7d39ad7dac08fd8419522abb00a0450855afa9] <==
	I1101 09:28:08.619080       1 server_others.go:69] "Using iptables proxy"
	I1101 09:28:08.660895       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1101 09:28:08.715658       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:28:08.717777       1 server_others.go:152] "Using iptables Proxier"
	I1101 09:28:08.717809       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1101 09:28:08.717817       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1101 09:28:08.717845       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 09:28:08.718052       1 server.go:846] "Version info" version="v1.28.0"
	I1101 09:28:08.718070       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:28:08.719276       1 config.go:188] "Starting service config controller"
	I1101 09:28:08.719286       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 09:28:08.719303       1 config.go:97] "Starting endpoint slice config controller"
	I1101 09:28:08.719309       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 09:28:08.719701       1 config.go:315] "Starting node config controller"
	I1101 09:28:08.719708       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 09:28:08.819502       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1101 09:28:08.819550       1 shared_informer.go:318] Caches are synced for service config
	I1101 09:28:08.819829       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [fe733df8bf3e845ffe6b6dedb1032f3540ea13212061a9c8d745c49a950708c5] <==
	I1101 09:28:05.553981       1 serving.go:348] Generated self-signed cert in-memory
	I1101 09:28:08.470256       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1101 09:28:08.470284       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:28:08.483397       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1101 09:28:08.483490       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1101 09:28:08.483508       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1101 09:28:08.483527       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1101 09:28:08.487086       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:28:08.487116       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 09:28:08.487188       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:28:08.487199       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1101 09:28:08.592290       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 09:28:08.592309       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1101 09:28:08.592346       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Nov 01 09:28:19 old-k8s-version-068218 kubelet[773]: I1101 09:28:19.539755     773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2f1b12d1-75ce-4b81-b8a2-ac87de146e8c-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-ljb5f\" (UID: \"2f1b12d1-75ce-4b81-b8a2-ac87de146e8c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ljb5f"
	Nov 01 09:28:19 old-k8s-version-068218 kubelet[773]: I1101 09:28:19.539797     773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnmfp\" (UniqueName: \"kubernetes.io/projected/846c840d-7045-4409-8bdb-bf9e147f23b8-kube-api-access-qnmfp\") pod \"kubernetes-dashboard-8694d4445c-cftdr\" (UID: \"846c840d-7045-4409-8bdb-bf9e147f23b8\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-cftdr"
	Nov 01 09:28:19 old-k8s-version-068218 kubelet[773]: I1101 09:28:19.539835     773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4rl5\" (UniqueName: \"kubernetes.io/projected/2f1b12d1-75ce-4b81-b8a2-ac87de146e8c-kube-api-access-v4rl5\") pod \"dashboard-metrics-scraper-5f989dc9cf-ljb5f\" (UID: \"2f1b12d1-75ce-4b81-b8a2-ac87de146e8c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ljb5f"
	Nov 01 09:28:19 old-k8s-version-068218 kubelet[773]: W1101 09:28:19.722057     773 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e88ec4f29f189ceff4fe4bdf474ad9f9e0ae1e6116ca92110016a09e33532bf4/crio-f7d4c0d99538d37c2aee50def677ba733624a3bc0372a301137746f7a6820f89 WatchSource:0}: Error finding container f7d4c0d99538d37c2aee50def677ba733624a3bc0372a301137746f7a6820f89: Status 404 returned error can't find the container with id f7d4c0d99538d37c2aee50def677ba733624a3bc0372a301137746f7a6820f89
	Nov 01 09:28:19 old-k8s-version-068218 kubelet[773]: W1101 09:28:19.737043     773 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e88ec4f29f189ceff4fe4bdf474ad9f9e0ae1e6116ca92110016a09e33532bf4/crio-447b5163e1b515ad69007a050b1fd4eefd1e71087322ada7c3c52fa6fde14705 WatchSource:0}: Error finding container 447b5163e1b515ad69007a050b1fd4eefd1e71087322ada7c3c52fa6fde14705: Status 404 returned error can't find the container with id 447b5163e1b515ad69007a050b1fd4eefd1e71087322ada7c3c52fa6fde14705
	Nov 01 09:28:24 old-k8s-version-068218 kubelet[773]: I1101 09:28:24.503129     773 scope.go:117] "RemoveContainer" containerID="d1347ec5c6fb9a5b6f98d371f91f5653f58110826eba2ddb305f6ec53f9f4a26"
	Nov 01 09:28:25 old-k8s-version-068218 kubelet[773]: I1101 09:28:25.512156     773 scope.go:117] "RemoveContainer" containerID="03c81ded95b883b274bc6dbbd9ef03122d76502dde5956dda1e74c3c1b42f6ff"
	Nov 01 09:28:25 old-k8s-version-068218 kubelet[773]: E1101 09:28:25.512446     773 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-ljb5f_kubernetes-dashboard(2f1b12d1-75ce-4b81-b8a2-ac87de146e8c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ljb5f" podUID="2f1b12d1-75ce-4b81-b8a2-ac87de146e8c"
	Nov 01 09:28:25 old-k8s-version-068218 kubelet[773]: I1101 09:28:25.512784     773 scope.go:117] "RemoveContainer" containerID="d1347ec5c6fb9a5b6f98d371f91f5653f58110826eba2ddb305f6ec53f9f4a26"
	Nov 01 09:28:26 old-k8s-version-068218 kubelet[773]: I1101 09:28:26.518359     773 scope.go:117] "RemoveContainer" containerID="03c81ded95b883b274bc6dbbd9ef03122d76502dde5956dda1e74c3c1b42f6ff"
	Nov 01 09:28:26 old-k8s-version-068218 kubelet[773]: E1101 09:28:26.518808     773 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-ljb5f_kubernetes-dashboard(2f1b12d1-75ce-4b81-b8a2-ac87de146e8c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ljb5f" podUID="2f1b12d1-75ce-4b81-b8a2-ac87de146e8c"
	Nov 01 09:28:29 old-k8s-version-068218 kubelet[773]: I1101 09:28:29.686174     773 scope.go:117] "RemoveContainer" containerID="03c81ded95b883b274bc6dbbd9ef03122d76502dde5956dda1e74c3c1b42f6ff"
	Nov 01 09:28:29 old-k8s-version-068218 kubelet[773]: E1101 09:28:29.686477     773 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-ljb5f_kubernetes-dashboard(2f1b12d1-75ce-4b81-b8a2-ac87de146e8c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ljb5f" podUID="2f1b12d1-75ce-4b81-b8a2-ac87de146e8c"
	Nov 01 09:28:38 old-k8s-version-068218 kubelet[773]: I1101 09:28:38.542780     773 scope.go:117] "RemoveContainer" containerID="3cbff79beb5a9432964e10a6930c81e374df801ea1c933508cf2b39f6c5c86b2"
	Nov 01 09:28:38 old-k8s-version-068218 kubelet[773]: I1101 09:28:38.577155     773 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-cftdr" podStartSLOduration=10.761139665 podCreationTimestamp="2025-11-01 09:28:19 +0000 UTC" firstStartedPulling="2025-11-01 09:28:19.739417634 +0000 UTC m=+18.578358969" lastFinishedPulling="2025-11-01 09:28:28.55537586 +0000 UTC m=+27.394317195" observedRunningTime="2025-11-01 09:28:29.536781023 +0000 UTC m=+28.375722350" watchObservedRunningTime="2025-11-01 09:28:38.577097891 +0000 UTC m=+37.416039226"
	Nov 01 09:28:42 old-k8s-version-068218 kubelet[773]: I1101 09:28:42.367622     773 scope.go:117] "RemoveContainer" containerID="03c81ded95b883b274bc6dbbd9ef03122d76502dde5956dda1e74c3c1b42f6ff"
	Nov 01 09:28:42 old-k8s-version-068218 kubelet[773]: I1101 09:28:42.561713     773 scope.go:117] "RemoveContainer" containerID="03c81ded95b883b274bc6dbbd9ef03122d76502dde5956dda1e74c3c1b42f6ff"
	Nov 01 09:28:42 old-k8s-version-068218 kubelet[773]: I1101 09:28:42.563443     773 scope.go:117] "RemoveContainer" containerID="a5fae30fce3491b8f98375ff9f0a4ceabfd362edca9834cecb6515441319d920"
	Nov 01 09:28:42 old-k8s-version-068218 kubelet[773]: E1101 09:28:42.563745     773 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-ljb5f_kubernetes-dashboard(2f1b12d1-75ce-4b81-b8a2-ac87de146e8c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ljb5f" podUID="2f1b12d1-75ce-4b81-b8a2-ac87de146e8c"
	Nov 01 09:28:49 old-k8s-version-068218 kubelet[773]: I1101 09:28:49.685940     773 scope.go:117] "RemoveContainer" containerID="a5fae30fce3491b8f98375ff9f0a4ceabfd362edca9834cecb6515441319d920"
	Nov 01 09:28:49 old-k8s-version-068218 kubelet[773]: E1101 09:28:49.686727     773 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-ljb5f_kubernetes-dashboard(2f1b12d1-75ce-4b81-b8a2-ac87de146e8c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ljb5f" podUID="2f1b12d1-75ce-4b81-b8a2-ac87de146e8c"
	Nov 01 09:28:54 old-k8s-version-068218 kubelet[773]: I1101 09:28:54.172757     773 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 01 09:28:54 old-k8s-version-068218 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 09:28:54 old-k8s-version-068218 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 09:28:54 old-k8s-version-068218 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [af4e0baac89bbf43554732d2ba200bf33d3c88daff4b532594b9253c2c92686f] <==
	2025/11/01 09:28:28 Using namespace: kubernetes-dashboard
	2025/11/01 09:28:28 Using in-cluster config to connect to apiserver
	2025/11/01 09:28:28 Using secret token for csrf signing
	2025/11/01 09:28:28 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 09:28:28 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 09:28:28 Successful initial request to the apiserver, version: v1.28.0
	2025/11/01 09:28:28 Generating JWE encryption key
	2025/11/01 09:28:28 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 09:28:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 09:28:28 Initializing JWE encryption key from synchronized object
	2025/11/01 09:28:28 Creating in-cluster Sidecar client
	2025/11/01 09:28:28 Serving insecurely on HTTP port: 9090
	2025/11/01 09:28:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:28:28 Starting overwatch
	
	
	==> storage-provisioner [3cbff79beb5a9432964e10a6930c81e374df801ea1c933508cf2b39f6c5c86b2] <==
	I1101 09:28:07.934468       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 09:28:38.007354       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [aa7739ac6e46f17ba37552d3aad001e0f45adf530a865a141ec3e994a46cee75] <==
	I1101 09:28:38.590161       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 09:28:38.604255       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 09:28:38.604304       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1101 09:28:56.003442       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 09:28:56.005703       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3ce5421b-f133-4a3c-9fef-747d273e5cf2", APIVersion:"v1", ResourceVersion:"662", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-068218_b4864570-fe9a-4d4b-a94d-80dc1869ea92 became leader
	I1101 09:28:56.005762       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-068218_b4864570-fe9a-4d4b-a94d-80dc1869ea92!
	I1101 09:28:56.106237       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-068218_b4864570-fe9a-4d4b-a94d-80dc1869ea92!
	

                                                
                                                
-- /stdout --
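The kubelet section above shows dashboard-metrics-scraper-5f989dc9cf-ljb5f stuck in CrashLoopBackOff, with the restart back-off growing from 10s to 20s; kubelet keeps doubling that delay on each failed restart, up to a five-minute cap. A minimal way to dig into the failing container directly, assuming the cluster were still reachable from the test host, would be:

	kubectl --context old-k8s-version-068218 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-5f989dc9cf-ljb5f
	kubectl --context old-k8s-version-068218 -n kubernetes-dashboard logs dashboard-metrics-scraper-5f989dc9cf-ljb5f --previous

The describe output would show the last termination reason and exit code, and --previous prints the logs of the crashed container rather than the one currently backing off.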
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-068218 -n old-k8s-version-068218
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-068218 -n old-k8s-version-068218: exit status 2 (378.000803ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-068218 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
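The harness snapshots the proxy environment here presumably because a stray HTTP_PROXY/HTTPS_PROXY setting without 127.0.0.1 in NO_PROXY is a common reason the CLI cannot reach an apiserver that is only published on loopback (as in the docker inspect output below). The same snapshot can be reproduced by hand on the test host, for example:

	env | grep -iE '^(http_proxy|https_proxy|no_proxy)=' || echo '<empty>'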
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-068218
helpers_test.go:243: (dbg) docker inspect old-k8s-version-068218:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e88ec4f29f189ceff4fe4bdf474ad9f9e0ae1e6116ca92110016a09e33532bf4",
	        "Created": "2025-11-01T09:26:34.668923657Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2491686,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:27:53.905932784Z",
	            "FinishedAt": "2025-11-01T09:27:53.092987369Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/e88ec4f29f189ceff4fe4bdf474ad9f9e0ae1e6116ca92110016a09e33532bf4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e88ec4f29f189ceff4fe4bdf474ad9f9e0ae1e6116ca92110016a09e33532bf4/hostname",
	        "HostsPath": "/var/lib/docker/containers/e88ec4f29f189ceff4fe4bdf474ad9f9e0ae1e6116ca92110016a09e33532bf4/hosts",
	        "LogPath": "/var/lib/docker/containers/e88ec4f29f189ceff4fe4bdf474ad9f9e0ae1e6116ca92110016a09e33532bf4/e88ec4f29f189ceff4fe4bdf474ad9f9e0ae1e6116ca92110016a09e33532bf4-json.log",
	        "Name": "/old-k8s-version-068218",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-068218:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-068218",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e88ec4f29f189ceff4fe4bdf474ad9f9e0ae1e6116ca92110016a09e33532bf4",
	                "LowerDir": "/var/lib/docker/overlay2/488e76226c62f13342a618b323cabf4fd578df8c302831cd955bc8b2c518c74e-init/diff:/var/lib/docker/overlay2/e248e2c4c8c52e2b41c7098e27a1e6d3433c7b0d01c47093073da500268c4b77/diff",
	                "MergedDir": "/var/lib/docker/overlay2/488e76226c62f13342a618b323cabf4fd578df8c302831cd955bc8b2c518c74e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/488e76226c62f13342a618b323cabf4fd578df8c302831cd955bc8b2c518c74e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/488e76226c62f13342a618b323cabf4fd578df8c302831cd955bc8b2c518c74e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-068218",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-068218/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-068218",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-068218",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-068218",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cbb920d089d4bef3b517ef1aad6863dcf9b559b5e3fb163268e6477284529fb3",
	            "SandboxKey": "/var/run/docker/netns/cbb920d089d4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36340"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36341"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36344"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36342"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36343"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-068218": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:4c:5c:92:74:80",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e195285262e64d3b782d7abf538ceec14d34fc8c1e31d12d18b21428d3b9ea34",
	                    "EndpointID": "f5bc9f461c409f14623bd01c82157352db4928b20d9c9d7ceb4ca13c5a5de4b3",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-068218",
	                        "e88ec4f29f18"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
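The inspect output shows every container port published only on 127.0.0.1 with an ephemeral host port; 8443/tcp, the apiserver port, maps to 36343 here. Later in the start log minikube reads these mappings with a Go template passed to docker container inspect -f, and the same approach works by hand as a quick sketch (assuming the container is still running; only the port key differs from the 22/tcp lookup shown in the log):

	docker container inspect old-k8s-version-068218 -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'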
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-068218 -n old-k8s-version-068218
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-068218 -n old-k8s-version-068218: exit status 2 (355.802527ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
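The host container reports Running while status still exits non-zero, which the harness tolerates ("may be ok"). To see what each component reports at once, the individual fields can be queried together, for example (a sketch assuming the profile still exists; .Kubelet and .Kubeconfig are the standard companions to the .Host and .APIServer templates the harness already uses):

	out/minikube-linux-arm64 status -p old-k8s-version-068218 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'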
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-068218 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-068218 logs -n 25: (1.362214757s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ ssh     │ -p cilium-206273 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo containerd config dump                                                                                                                                                                                                  │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ ssh     │ -p cilium-206273 sudo crio config                                                                                                                                                                                                             │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ delete  │ -p cilium-206273                                                                                                                                                                                                                              │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │ 01 Nov 25 09:25 UTC │
	│ start   │ -p force-systemd-env-778652 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-778652 │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │ 01 Nov 25 09:25 UTC │
	│ start   │ -p pause-951206 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-951206             │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │ 01 Nov 25 09:25 UTC │
	│ pause   │ -p pause-951206 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-951206             │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ delete  │ -p force-systemd-env-778652                                                                                                                                                                                                                   │ force-systemd-env-778652 │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │ 01 Nov 25 09:25 UTC │
	│ delete  │ -p pause-951206                                                                                                                                                                                                                               │ pause-951206             │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │ 01 Nov 25 09:25 UTC │
	│ start   │ -p cert-expiration-218273 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-218273   │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │ 01 Nov 25 09:26 UTC │
	│ start   │ -p cert-options-578478 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-578478      │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │ 01 Nov 25 09:26 UTC │
	│ ssh     │ cert-options-578478 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-578478      │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ 01 Nov 25 09:26 UTC │
	│ ssh     │ -p cert-options-578478 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-578478      │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ 01 Nov 25 09:26 UTC │
	│ delete  │ -p cert-options-578478                                                                                                                                                                                                                        │ cert-options-578478      │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ 01 Nov 25 09:26 UTC │
	│ start   │ -p old-k8s-version-068218 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-068218   │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ 01 Nov 25 09:27 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-068218 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-068218   │ jenkins │ v1.37.0 │ 01 Nov 25 09:27 UTC │                     │
	│ stop    │ -p old-k8s-version-068218 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-068218   │ jenkins │ v1.37.0 │ 01 Nov 25 09:27 UTC │ 01 Nov 25 09:27 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-068218 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-068218   │ jenkins │ v1.37.0 │ 01 Nov 25 09:27 UTC │ 01 Nov 25 09:27 UTC │
	│ start   │ -p old-k8s-version-068218 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-068218   │ jenkins │ v1.37.0 │ 01 Nov 25 09:27 UTC │ 01 Nov 25 09:28 UTC │
	│ image   │ old-k8s-version-068218 image list --format=json                                                                                                                                                                                               │ old-k8s-version-068218   │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │ 01 Nov 25 09:28 UTC │
	│ pause   │ -p old-k8s-version-068218 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-068218   │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:27:53
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:27:53.625517 2491559 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:27:53.625647 2491559 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:27:53.625658 2491559 out.go:374] Setting ErrFile to fd 2...
	I1101 09:27:53.625663 2491559 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:27:53.625921 2491559 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 09:27:53.626442 2491559 out.go:368] Setting JSON to false
	I1101 09:27:53.627391 2491559 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":65420,"bootTime":1761923854,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 09:27:53.627455 2491559 start.go:143] virtualization:  
	I1101 09:27:53.630368 2491559 out.go:179] * [old-k8s-version-068218] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 09:27:53.634231 2491559 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:27:53.634379 2491559 notify.go:221] Checking for updates...
	I1101 09:27:53.640032 2491559 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:27:53.643011 2491559 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:27:53.646002 2491559 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2314135/.minikube
	I1101 09:27:53.648890 2491559 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 09:27:53.651839 2491559 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:27:53.655535 2491559 config.go:182] Loaded profile config "old-k8s-version-068218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 09:27:53.659041 2491559 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1101 09:27:53.661876 2491559 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:27:53.695700 2491559 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:27:53.695955 2491559 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:27:53.749502 2491559 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 09:27:53.740615043 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:27:53.749608 2491559 docker.go:319] overlay module found
	I1101 09:27:53.752734 2491559 out.go:179] * Using the docker driver based on existing profile
	I1101 09:27:53.755591 2491559 start.go:309] selected driver: docker
	I1101 09:27:53.755610 2491559 start.go:930] validating driver "docker" against &{Name:old-k8s-version-068218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-068218 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:27:53.755717 2491559 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:27:53.756467 2491559 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:27:53.813331 2491559 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 09:27:53.804648909 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:27:53.813672 2491559 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:27:53.813706 2491559 cni.go:84] Creating CNI manager for ""
	I1101 09:27:53.813758 2491559 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:27:53.813800 2491559 start.go:353] cluster config:
	{Name:old-k8s-version-068218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-068218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:27:53.817074 2491559 out.go:179] * Starting "old-k8s-version-068218" primary control-plane node in "old-k8s-version-068218" cluster
	I1101 09:27:53.820028 2491559 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:27:53.822994 2491559 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:27:53.825844 2491559 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 09:27:53.825922 2491559 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1101 09:27:53.825923 2491559 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:27:53.825936 2491559 cache.go:59] Caching tarball of preloaded images
	I1101 09:27:53.826094 2491559 preload.go:233] Found /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 09:27:53.826108 2491559 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1101 09:27:53.826283 2491559 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/config.json ...
	I1101 09:27:53.853749 2491559 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:27:53.853768 2491559 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:27:53.853788 2491559 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:27:53.853821 2491559 start.go:360] acquireMachinesLock for old-k8s-version-068218: {Name:mkfc282fcc0d94abffeef2a346c8ebfcf87a3759 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:27:53.853881 2491559 start.go:364] duration metric: took 42.649µs to acquireMachinesLock for "old-k8s-version-068218"
	I1101 09:27:53.853901 2491559 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:27:53.853906 2491559 fix.go:54] fixHost starting: 
	I1101 09:27:53.854165 2491559 cli_runner.go:164] Run: docker container inspect old-k8s-version-068218 --format={{.State.Status}}
	I1101 09:27:53.871456 2491559 fix.go:112] recreateIfNeeded on old-k8s-version-068218: state=Stopped err=<nil>
	W1101 09:27:53.871484 2491559 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:27:53.874677 2491559 out.go:252] * Restarting existing docker container for "old-k8s-version-068218" ...
	I1101 09:27:53.874767 2491559 cli_runner.go:164] Run: docker start old-k8s-version-068218
	I1101 09:27:54.127124 2491559 cli_runner.go:164] Run: docker container inspect old-k8s-version-068218 --format={{.State.Status}}
	I1101 09:27:54.146460 2491559 kic.go:430] container "old-k8s-version-068218" state is running.
	I1101 09:27:54.147002 2491559 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-068218
	I1101 09:27:54.171491 2491559 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/config.json ...
	I1101 09:27:54.171869 2491559 machine.go:94] provisionDockerMachine start ...
	I1101 09:27:54.171945 2491559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-068218
	I1101 09:27:54.195097 2491559 main.go:143] libmachine: Using SSH client type: native
	I1101 09:27:54.195408 2491559 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36340 <nil> <nil>}
	I1101 09:27:54.195416 2491559 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:27:54.196107 2491559 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36808->127.0.0.1:36340: read: connection reset by peer
	I1101 09:27:57.347340 2491559 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-068218
	
	I1101 09:27:57.347362 2491559 ubuntu.go:182] provisioning hostname "old-k8s-version-068218"
	I1101 09:27:57.347429 2491559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-068218
	I1101 09:27:57.365421 2491559 main.go:143] libmachine: Using SSH client type: native
	I1101 09:27:57.365737 2491559 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36340 <nil> <nil>}
	I1101 09:27:57.365759 2491559 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-068218 && echo "old-k8s-version-068218" | sudo tee /etc/hostname
	I1101 09:27:57.530134 2491559 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-068218
	
	I1101 09:27:57.530231 2491559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-068218
	I1101 09:27:57.548953 2491559 main.go:143] libmachine: Using SSH client type: native
	I1101 09:27:57.549274 2491559 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36340 <nil> <nil>}
	I1101 09:27:57.549332 2491559 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-068218' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-068218/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-068218' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:27:57.700021 2491559 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:27:57.700051 2491559 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-2314135/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-2314135/.minikube}
	I1101 09:27:57.700079 2491559 ubuntu.go:190] setting up certificates
	I1101 09:27:57.700089 2491559 provision.go:84] configureAuth start
	I1101 09:27:57.700148 2491559 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-068218
	I1101 09:27:57.718939 2491559 provision.go:143] copyHostCerts
	I1101 09:27:57.719009 2491559 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem, removing ...
	I1101 09:27:57.719031 2491559 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem
	I1101 09:27:57.719111 2491559 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem (1123 bytes)
	I1101 09:27:57.719220 2491559 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem, removing ...
	I1101 09:27:57.719231 2491559 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem
	I1101 09:27:57.719258 2491559 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem (1675 bytes)
	I1101 09:27:57.719312 2491559 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem, removing ...
	I1101 09:27:57.719320 2491559 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem
	I1101 09:27:57.719343 2491559 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem (1082 bytes)
	I1101 09:27:57.719393 2491559 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-068218 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-068218]
	I1101 09:27:58.354793 2491559 provision.go:177] copyRemoteCerts
	I1101 09:27:58.354859 2491559 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:27:58.354905 2491559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-068218
	I1101 09:27:58.372424 2491559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36340 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/old-k8s-version-068218/id_rsa Username:docker}
	I1101 09:27:58.479284 2491559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 09:27:58.496836 2491559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:27:58.514966 2491559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1101 09:27:58.531685 2491559 provision.go:87] duration metric: took 831.571466ms to configureAuth
	I1101 09:27:58.531709 2491559 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:27:58.531937 2491559 config.go:182] Loaded profile config "old-k8s-version-068218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 09:27:58.532044 2491559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-068218
	I1101 09:27:58.549182 2491559 main.go:143] libmachine: Using SSH client type: native
	I1101 09:27:58.549572 2491559 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36340 <nil> <nil>}
	I1101 09:27:58.549589 2491559 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:27:58.872239 2491559 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:27:58.872258 2491559 machine.go:97] duration metric: took 4.700375122s to provisionDockerMachine
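The provisioning steps above all run over SSH against the container's forwarded port (127.0.0.1:36340 in this log). A minimal standalone sketch of issuing one such command with golang.org/x/crypto/ssh follows; the address, user and key location are taken from the log, while the function layout and error handling are illustrative and not minikube's actual ssh_runner implementation.

// runremote.go: minimal sketch of executing a provisioning command over SSH,
// in the spirit of the ssh_runner calls in the log above. Not minikube code.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Address and user taken from the log; the real key lives under the Jenkins
	// workspace (.minikube/machines/old-k8s-version-068218/id_rsa).
	addr := "127.0.0.1:36340"
	keyPath := os.ExpandEnv("$HOME/.minikube/machines/old-k8s-version-068218/id_rsa")

	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		log.Fatalf("read key: %v", err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatalf("parse key: %v", err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatalf("session: %v", err)
	}
	defer session.Close()

	// One of the commands visible in the log.
	out, err := session.CombinedOutput("cat /etc/os-release")
	if err != nil {
		log.Fatalf("run: %v", err)
	}
	fmt.Print(string(out))
}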
	I1101 09:27:58.872269 2491559 start.go:293] postStartSetup for "old-k8s-version-068218" (driver="docker")
	I1101 09:27:58.872279 2491559 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:27:58.872335 2491559 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:27:58.872403 2491559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-068218
	I1101 09:27:58.892562 2491559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36340 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/old-k8s-version-068218/id_rsa Username:docker}
	I1101 09:27:58.995539 2491559 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:27:58.998729 2491559 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:27:58.998756 2491559 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:27:58.998767 2491559 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/addons for local assets ...
	I1101 09:27:58.998818 2491559 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/files for local assets ...
	I1101 09:27:58.998932 2491559 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem -> 23159822.pem in /etc/ssl/certs
	I1101 09:27:58.999040 2491559 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:27:59.007035 2491559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem --> /etc/ssl/certs/23159822.pem (1708 bytes)
	I1101 09:27:59.026617 2491559 start.go:296] duration metric: took 154.332987ms for postStartSetup
	I1101 09:27:59.026745 2491559 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:27:59.026790 2491559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-068218
	I1101 09:27:59.048855 2491559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36340 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/old-k8s-version-068218/id_rsa Username:docker}
	I1101 09:27:59.148932 2491559 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:27:59.153919 2491559 fix.go:56] duration metric: took 5.300007009s for fixHost
	I1101 09:27:59.153942 2491559 start.go:83] releasing machines lock for "old-k8s-version-068218", held for 5.300052423s
	I1101 09:27:59.154023 2491559 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-068218
	I1101 09:27:59.171770 2491559 ssh_runner.go:195] Run: cat /version.json
	I1101 09:27:59.171830 2491559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-068218
	I1101 09:27:59.171931 2491559 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:27:59.171984 2491559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-068218
	I1101 09:27:59.190436 2491559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36340 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/old-k8s-version-068218/id_rsa Username:docker}
	I1101 09:27:59.213046 2491559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36340 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/old-k8s-version-068218/id_rsa Username:docker}
	I1101 09:27:59.402539 2491559 ssh_runner.go:195] Run: systemctl --version
	I1101 09:27:59.408890 2491559 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:27:59.446353 2491559 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:27:59.450718 2491559 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:27:59.450791 2491559 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:27:59.458824 2491559 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 09:27:59.458852 2491559 start.go:496] detecting cgroup driver to use...
	I1101 09:27:59.458883 2491559 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 09:27:59.458940 2491559 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:27:59.474085 2491559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:27:59.487016 2491559 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:27:59.487116 2491559 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:27:59.503008 2491559 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:27:59.516520 2491559 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:27:59.627273 2491559 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:27:59.742132 2491559 docker.go:234] disabling docker service ...
	I1101 09:27:59.742222 2491559 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:27:59.759991 2491559 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:27:59.773316 2491559 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:27:59.895613 2491559 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:28:00.025382 2491559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:28:00.049339 2491559 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:28:00.074272 2491559 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 09:28:00.074358 2491559 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:28:00.099294 2491559 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:28:00.099387 2491559 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:28:00.123787 2491559 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:28:00.150254 2491559 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:28:00.177995 2491559 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:28:00.204108 2491559 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:28:00.259076 2491559 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:28:00.278494 2491559 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:28:00.292127 2491559 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:28:00.305512 2491559 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:28:00.339980 2491559 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:28:00.550308 2491559 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:28:00.697410 2491559 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:28:00.697478 2491559 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:28:00.701366 2491559 start.go:564] Will wait 60s for crictl version
	I1101 09:28:00.701432 2491559 ssh_runner.go:195] Run: which crictl
	I1101 09:28:00.704871 2491559 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:28:00.729651 2491559 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:28:00.729752 2491559 ssh_runner.go:195] Run: crio --version
	I1101 09:28:00.760059 2491559 ssh_runner.go:195] Run: crio --version
	I1101 09:28:00.793535 2491559 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1101 09:28:00.796580 2491559 cli_runner.go:164] Run: docker network inspect old-k8s-version-068218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
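The docker network inspect call above uses a Go template to emit a small JSON document (name, driver, subnet, gateway, MTU, container IPs). A hypothetical decoding sketch is shown below; the struct fields mirror the keys in that format string, and the sample input is abbreviated (the real template leaves a trailing comma inside the ContainerIPs list, which would need trimming before strict JSON parsing).

// netinfo.go: sketch of decoding the JSON produced by the "docker network inspect"
// format string in the log above. Field names mirror that template; not minikube code.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type networkInspect struct {
	Name         string   `json:"Name"`
	Driver       string   `json:"Driver"`
	Subnet       string   `json:"Subnet"`
	Gateway      string   `json:"Gateway"`
	MTU          int      `json:"MTU"`
	ContainerIPs []string `json:"ContainerIPs"`
}

func main() {
	// Abbreviated example of what the template emits for this profile's network.
	raw := []byte(`{"Name":"old-k8s-version-068218","Driver":"bridge","Subnet":"192.168.85.0/24","Gateway":"192.168.85.1","MTU":1500,"ContainerIPs":["192.168.85.2/24"]}`)

	var ni networkInspect
	if err := json.Unmarshal(raw, &ni); err != nil {
		log.Fatalf("decode: %v", err)
	}
	fmt.Printf("network %s: subnet=%s gateway=%s mtu=%d\n", ni.Name, ni.Subnet, ni.Gateway, ni.MTU)
}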
	I1101 09:28:00.811545 2491559 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 09:28:00.815468 2491559 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:28:00.824845 2491559 kubeadm.go:884] updating cluster {Name:old-k8s-version-068218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-068218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:28:00.824954 2491559 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 09:28:00.825011 2491559 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:28:00.859837 2491559 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:28:00.859887 2491559 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:28:00.859943 2491559 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:28:00.887443 2491559 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:28:00.887471 2491559 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:28:00.887479 2491559 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1101 09:28:00.887580 2491559 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-068218 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-068218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:28:00.887663 2491559 ssh_runner.go:195] Run: crio config
	I1101 09:28:00.951649 2491559 cni.go:84] Creating CNI manager for ""
	I1101 09:28:00.951718 2491559 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:28:00.951755 2491559 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:28:00.951816 2491559 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-068218 NodeName:old-k8s-version-068218 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:28:00.952023 2491559 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-068218"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
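The multi-document manifest above is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below (2160 bytes). A minimal sketch of splitting such a file and spot-checking a few fields with gopkg.in/yaml.v3 follows; it uses generic maps instead of the real kubeadm API types, so treat it as illustration only.

// checkcfg.go: sketch of splitting the multi-document kubeadm YAML shown above and
// spot-checking fields with generic maps (not the real kubeadm API types).
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // e.g. a local copy of /var/tmp/minikube/kubeadm.yaml.new
	if err != nil {
		log.Fatalf("open: %v", err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatalf("decode: %v", err)
		}
		kind, _ := doc["kind"].(string)
		switch kind {
		case "ClusterConfiguration":
			fmt.Println("kubernetesVersion:", doc["kubernetesVersion"])
		case "KubeletConfiguration":
			fmt.Println("cgroupDriver:", doc["cgroupDriver"])
		case "KubeProxyConfiguration":
			fmt.Println("clusterCIDR:", doc["clusterCIDR"])
		}
	}
}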
	I1101 09:28:00.952113 2491559 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1101 09:28:00.959652 2491559 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:28:00.959715 2491559 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:28:00.966821 2491559 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1101 09:28:00.979481 2491559 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:28:00.991664 2491559 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1101 09:28:01.004405 2491559 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:28:01.009154 2491559 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:28:01.018470 2491559 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:28:01.141461 2491559 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:28:01.158588 2491559 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218 for IP: 192.168.85.2
	I1101 09:28:01.158662 2491559 certs.go:195] generating shared ca certs ...
	I1101 09:28:01.158693 2491559 certs.go:227] acquiring lock for ca certs: {Name:mk24842b93d4e231663829c7c8677798ff77a3a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:28:01.158878 2491559 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key
	I1101 09:28:01.158976 2491559 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key
	I1101 09:28:01.159005 2491559 certs.go:257] generating profile certs ...
	I1101 09:28:01.159160 2491559 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/client.key
	I1101 09:28:01.159278 2491559 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/apiserver.key.85e8465c
	I1101 09:28:01.159372 2491559 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/proxy-client.key
	I1101 09:28:01.159538 2491559 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem (1338 bytes)
	W1101 09:28:01.159605 2491559 certs.go:480] ignoring /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982_empty.pem, impossibly tiny 0 bytes
	I1101 09:28:01.159630 2491559 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 09:28:01.159687 2491559 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:28:01.159748 2491559 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:28:01.159802 2491559 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem (1675 bytes)
	I1101 09:28:01.159938 2491559 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem (1708 bytes)
	I1101 09:28:01.160843 2491559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:28:01.184435 2491559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 09:28:01.204423 2491559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:28:01.223921 2491559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:28:01.244953 2491559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 09:28:01.269713 2491559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 09:28:01.291234 2491559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:28:01.311945 2491559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 09:28:01.343275 2491559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:28:01.367338 2491559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem --> /usr/share/ca-certificates/2315982.pem (1338 bytes)
	I1101 09:28:01.390458 2491559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem --> /usr/share/ca-certificates/23159822.pem (1708 bytes)
	I1101 09:28:01.411414 2491559 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:28:01.436905 2491559 ssh_runner.go:195] Run: openssl version
	I1101 09:28:01.444314 2491559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23159822.pem && ln -fs /usr/share/ca-certificates/23159822.pem /etc/ssl/certs/23159822.pem"
	I1101 09:28:01.457253 2491559 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23159822.pem
	I1101 09:28:01.462836 2491559 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 08:36 /usr/share/ca-certificates/23159822.pem
	I1101 09:28:01.462979 2491559 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23159822.pem
	I1101 09:28:01.530638 2491559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23159822.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:28:01.540625 2491559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:28:01.549985 2491559 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:28:01.554389 2491559 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:28:01.554458 2491559 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:28:01.600580 2491559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:28:01.612149 2491559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2315982.pem && ln -fs /usr/share/ca-certificates/2315982.pem /etc/ssl/certs/2315982.pem"
	I1101 09:28:01.621589 2491559 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2315982.pem
	I1101 09:28:01.627883 2491559 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 08:36 /usr/share/ca-certificates/2315982.pem
	I1101 09:28:01.628025 2491559 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2315982.pem
	I1101 09:28:01.679479 2491559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2315982.pem /etc/ssl/certs/51391683.0"
	I1101 09:28:01.687727 2491559 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:28:01.691803 2491559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:28:01.735006 2491559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:28:01.777726 2491559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:28:01.830677 2491559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:28:01.884146 2491559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:28:01.953537 2491559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
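Each openssl "-checkend 86400" call above asserts that a control-plane certificate remains valid for at least another 24 hours before the existing machine is reused. A hedged Go equivalent of that single check is sketched below; the path is one example from the log and the rest is illustrative.

// certcheck.go: sketch of the "-checkend 86400" idea from the log above:
// fail if a PEM certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt" // example path from the log
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatalf("read %s: %v", path, err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatalf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatalf("parse: %v", err)
	}
	deadline := time.Now().Add(24 * time.Hour) // same window as -checkend 86400
	if cert.NotAfter.Before(deadline) {
		log.Fatalf("certificate %s expires at %s (within 24h)", path, cert.NotAfter)
	}
	fmt.Printf("certificate valid until %s\n", cert.NotAfter)
}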
	I1101 09:28:02.044328 2491559 kubeadm.go:401] StartCluster: {Name:old-k8s-version-068218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-068218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:28:02.044488 2491559 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:28:02.044599 2491559 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:28:02.108684 2491559 cri.go:89] found id: "fe733df8bf3e845ffe6b6dedb1032f3540ea13212061a9c8d745c49a950708c5"
	I1101 09:28:02.108748 2491559 cri.go:89] found id: "067c804f8e21876fb45f3c152802ae3d319e8a7ba1a0ed58c096fa2d93f176f8"
	I1101 09:28:02.108768 2491559 cri.go:89] found id: "74dcaccfb8d03e88ec7bc0d5f860e724acc1ef7e6b6647ac057b5ec4884a4749"
	I1101 09:28:02.108791 2491559 cri.go:89] found id: "7fb0e5e75636afbc0298538d44e50df7785d62e2185f396e1c8404fbf222a6e4"
	I1101 09:28:02.108809 2491559 cri.go:89] found id: ""
	I1101 09:28:02.108894 2491559 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 09:28:02.134503 2491559 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:28:02Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:28:02.134639 2491559 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:28:02.150523 2491559 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 09:28:02.150582 2491559 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 09:28:02.150661 2491559 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 09:28:02.161955 2491559 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:28:02.162574 2491559 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-068218" does not appear in /home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:28:02.162881 2491559 kubeconfig.go:62] /home/jenkins/minikube-integration/21835-2314135/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-068218" cluster setting kubeconfig missing "old-k8s-version-068218" context setting]
	I1101 09:28:02.163394 2491559 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/kubeconfig: {Name:mk53329368b7306829f4e47471838b13e1e36d2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:28:02.165314 2491559 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 09:28:02.183774 2491559 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1101 09:28:02.183879 2491559 kubeadm.go:602] duration metric: took 33.245732ms to restartPrimaryControlPlane
	I1101 09:28:02.183907 2491559 kubeadm.go:403] duration metric: took 139.590857ms to StartCluster
	I1101 09:28:02.183936 2491559 settings.go:142] acquiring lock: {Name:mka73a3765cb6575d4abe38a6ae3325222684786 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:28:02.184016 2491559 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:28:02.185003 2491559 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/kubeconfig: {Name:mk53329368b7306829f4e47471838b13e1e36d2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:28:02.185244 2491559 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:28:02.185607 2491559 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:28:02.185678 2491559 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-068218"
	I1101 09:28:02.185693 2491559 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-068218"
	W1101 09:28:02.185700 2491559 addons.go:248] addon storage-provisioner should already be in state true
	I1101 09:28:02.185720 2491559 host.go:66] Checking if "old-k8s-version-068218" exists ...
	I1101 09:28:02.186481 2491559 cli_runner.go:164] Run: docker container inspect old-k8s-version-068218 --format={{.State.Status}}
	I1101 09:28:02.186894 2491559 config.go:182] Loaded profile config "old-k8s-version-068218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 09:28:02.186972 2491559 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-068218"
	I1101 09:28:02.187011 2491559 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-068218"
	I1101 09:28:02.187312 2491559 cli_runner.go:164] Run: docker container inspect old-k8s-version-068218 --format={{.State.Status}}
	I1101 09:28:02.187517 2491559 addons.go:70] Setting dashboard=true in profile "old-k8s-version-068218"
	I1101 09:28:02.187552 2491559 addons.go:239] Setting addon dashboard=true in "old-k8s-version-068218"
	W1101 09:28:02.187571 2491559 addons.go:248] addon dashboard should already be in state true
	I1101 09:28:02.187645 2491559 host.go:66] Checking if "old-k8s-version-068218" exists ...
	I1101 09:28:02.188094 2491559 cli_runner.go:164] Run: docker container inspect old-k8s-version-068218 --format={{.State.Status}}
	I1101 09:28:02.190333 2491559 out.go:179] * Verifying Kubernetes components...
	I1101 09:28:02.198326 2491559 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:28:02.240730 2491559 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:28:02.246074 2491559 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:28:02.246098 2491559 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:28:02.246170 2491559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-068218
	I1101 09:28:02.256257 2491559 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-068218"
	W1101 09:28:02.256284 2491559 addons.go:248] addon default-storageclass should already be in state true
	I1101 09:28:02.256310 2491559 host.go:66] Checking if "old-k8s-version-068218" exists ...
	I1101 09:28:02.256737 2491559 cli_runner.go:164] Run: docker container inspect old-k8s-version-068218 --format={{.State.Status}}
	I1101 09:28:02.284945 2491559 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 09:28:02.288575 2491559 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 09:28:02.291978 2491559 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 09:28:02.292013 2491559 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 09:28:02.292087 2491559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-068218
	I1101 09:28:02.301839 2491559 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:28:02.301860 2491559 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:28:02.301934 2491559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-068218
	I1101 09:28:02.312829 2491559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36340 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/old-k8s-version-068218/id_rsa Username:docker}
	I1101 09:28:02.356591 2491559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36340 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/old-k8s-version-068218/id_rsa Username:docker}
	I1101 09:28:02.359528 2491559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36340 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/old-k8s-version-068218/id_rsa Username:docker}
	I1101 09:28:02.584387 2491559 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:28:02.596279 2491559 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:28:02.632837 2491559 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-068218" to be "Ready" ...
	I1101 09:28:02.694637 2491559 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 09:28:02.694659 2491559 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 09:28:02.718767 2491559 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:28:02.747626 2491559 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 09:28:02.747650 2491559 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 09:28:02.843526 2491559 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 09:28:02.843597 2491559 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 09:28:02.928475 2491559 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 09:28:02.928556 2491559 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 09:28:02.965144 2491559 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 09:28:02.965215 2491559 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 09:28:03.002493 2491559 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 09:28:03.002586 2491559 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 09:28:03.028328 2491559 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 09:28:03.028402 2491559 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 09:28:03.052642 2491559 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 09:28:03.052711 2491559 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 09:28:03.068211 2491559 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 09:28:03.068282 2491559 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 09:28:03.090834 2491559 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 09:28:06.958751 2491559 node_ready.go:49] node "old-k8s-version-068218" is "Ready"
	I1101 09:28:06.958779 2491559 node_ready.go:38] duration metric: took 4.325863033s for node "old-k8s-version-068218" to be "Ready" ...
	I1101 09:28:06.958795 2491559 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:28:06.958855 2491559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:28:08.776895 2491559 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.058093891s)
	I1101 09:28:08.777185 2491559 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.18083436s)
	I1101 09:28:09.326948 2491559 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.236075616s)
	I1101 09:28:09.326997 2491559 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.368126352s)
	I1101 09:28:09.327215 2491559 api_server.go:72] duration metric: took 7.141911391s to wait for apiserver process to appear ...
	I1101 09:28:09.327225 2491559 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:28:09.327242 2491559 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:28:09.330002 2491559 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-068218 addons enable metrics-server
	
	I1101 09:28:09.333074 2491559 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1101 09:28:09.335909 2491559 addons.go:515] duration metric: took 7.150288528s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1101 09:28:09.336865 2491559 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1101 09:28:09.338255 2491559 api_server.go:141] control plane version: v1.28.0
	I1101 09:28:09.338277 2491559 api_server.go:131] duration metric: took 11.046003ms to wait for apiserver health ...
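The health probe above simply expects HTTP 200 from https://192.168.85.2:8443/healthz. A minimal polling sketch follows; it skips TLS verification for brevity, whereas the real client authenticates with the cluster CA, so treat it as illustration only.

// healthz.go: sketch of polling the apiserver /healthz endpoint seen in the log.
// TLS verification is skipped here for brevity; the real check uses the cluster CA.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.85.2:8443/healthz" // endpoint from the log
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("apiserver did not become healthy before the deadline")
}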
	I1101 09:28:09.338286 2491559 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:28:09.344282 2491559 system_pods.go:59] 8 kube-system pods found
	I1101 09:28:09.344321 2491559 system_pods.go:61] "coredns-5dd5756b68-b4f66" [6758b28d-65e8-4750-8150-214984beb6a2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:28:09.344331 2491559 system_pods.go:61] "etcd-old-k8s-version-068218" [97c22198-a6fa-4d82-8ae3-981cf4543c10] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:28:09.344337 2491559 system_pods.go:61] "kindnet-8ks7s" [7eeb1ffb-51f8-4229-bf9c-6457fdc0eede] Running
	I1101 09:28:09.344344 2491559 system_pods.go:61] "kube-apiserver-old-k8s-version-068218" [13d7db97-cfab-4362-b3b7-ac0a5aef54fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:28:09.344351 2491559 system_pods.go:61] "kube-controller-manager-old-k8s-version-068218" [b0d936ee-d062-4e6c-9d95-4574d23b71fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:28:09.344364 2491559 system_pods.go:61] "kube-proxy-9574h" [23a5f11d-f074-4c54-a831-2ec6b7220d73] Running
	I1101 09:28:09.344372 2491559 system_pods.go:61] "kube-scheduler-old-k8s-version-068218" [b70eb666-3066-4829-ba12-05475e5c8509] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:28:09.344379 2491559 system_pods.go:61] "storage-provisioner" [2cf435bc-9907-4482-a9ba-eee3b7afe7d2] Running
	I1101 09:28:09.344385 2491559 system_pods.go:74] duration metric: took 6.09451ms to wait for pod list to return data ...
	I1101 09:28:09.344397 2491559 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:28:09.347290 2491559 default_sa.go:45] found service account: "default"
	I1101 09:28:09.347324 2491559 default_sa.go:55] duration metric: took 2.920575ms for default service account to be created ...
	I1101 09:28:09.347333 2491559 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:28:09.350708 2491559 system_pods.go:86] 8 kube-system pods found
	I1101 09:28:09.350737 2491559 system_pods.go:89] "coredns-5dd5756b68-b4f66" [6758b28d-65e8-4750-8150-214984beb6a2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:28:09.350768 2491559 system_pods.go:89] "etcd-old-k8s-version-068218" [97c22198-a6fa-4d82-8ae3-981cf4543c10] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:28:09.350783 2491559 system_pods.go:89] "kindnet-8ks7s" [7eeb1ffb-51f8-4229-bf9c-6457fdc0eede] Running
	I1101 09:28:09.350799 2491559 system_pods.go:89] "kube-apiserver-old-k8s-version-068218" [13d7db97-cfab-4362-b3b7-ac0a5aef54fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:28:09.350806 2491559 system_pods.go:89] "kube-controller-manager-old-k8s-version-068218" [b0d936ee-d062-4e6c-9d95-4574d23b71fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:28:09.350820 2491559 system_pods.go:89] "kube-proxy-9574h" [23a5f11d-f074-4c54-a831-2ec6b7220d73] Running
	I1101 09:28:09.350842 2491559 system_pods.go:89] "kube-scheduler-old-k8s-version-068218" [b70eb666-3066-4829-ba12-05475e5c8509] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:28:09.350860 2491559 system_pods.go:89] "storage-provisioner" [2cf435bc-9907-4482-a9ba-eee3b7afe7d2] Running
	I1101 09:28:09.350868 2491559 system_pods.go:126] duration metric: took 3.528927ms to wait for k8s-apps to be running ...
	I1101 09:28:09.350889 2491559 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:28:09.350972 2491559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:28:09.378121 2491559 system_svc.go:56] duration metric: took 27.223688ms WaitForService to wait for kubelet
	I1101 09:28:09.378159 2491559 kubeadm.go:587] duration metric: took 7.192864446s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:28:09.378178 2491559 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:28:09.381476 2491559 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 09:28:09.381508 2491559 node_conditions.go:123] node cpu capacity is 2
	I1101 09:28:09.381520 2491559 node_conditions.go:105] duration metric: took 3.336826ms to run NodePressure ...
	I1101 09:28:09.381557 2491559 start.go:242] waiting for startup goroutines ...
	I1101 09:28:09.381566 2491559 start.go:247] waiting for cluster config update ...
	I1101 09:28:09.381581 2491559 start.go:256] writing updated cluster config ...
	I1101 09:28:09.381871 2491559 ssh_runner.go:195] Run: rm -f paused
	I1101 09:28:09.386181 2491559 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:28:09.390464 2491559 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-b4f66" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 09:28:11.396391 2491559 pod_ready.go:104] pod "coredns-5dd5756b68-b4f66" is not "Ready", error: <nil>
	W1101 09:28:13.895838 2491559 pod_ready.go:104] pod "coredns-5dd5756b68-b4f66" is not "Ready", error: <nil>
	W1101 09:28:15.896307 2491559 pod_ready.go:104] pod "coredns-5dd5756b68-b4f66" is not "Ready", error: <nil>
	W1101 09:28:17.896761 2491559 pod_ready.go:104] pod "coredns-5dd5756b68-b4f66" is not "Ready", error: <nil>
	W1101 09:28:19.897322 2491559 pod_ready.go:104] pod "coredns-5dd5756b68-b4f66" is not "Ready", error: <nil>
	W1101 09:28:22.396594 2491559 pod_ready.go:104] pod "coredns-5dd5756b68-b4f66" is not "Ready", error: <nil>
	W1101 09:28:24.397106 2491559 pod_ready.go:104] pod "coredns-5dd5756b68-b4f66" is not "Ready", error: <nil>
	W1101 09:28:26.898327 2491559 pod_ready.go:104] pod "coredns-5dd5756b68-b4f66" is not "Ready", error: <nil>
	W1101 09:28:29.395950 2491559 pod_ready.go:104] pod "coredns-5dd5756b68-b4f66" is not "Ready", error: <nil>
	W1101 09:28:31.396342 2491559 pod_ready.go:104] pod "coredns-5dd5756b68-b4f66" is not "Ready", error: <nil>
	W1101 09:28:33.896523 2491559 pod_ready.go:104] pod "coredns-5dd5756b68-b4f66" is not "Ready", error: <nil>
	W1101 09:28:35.896613 2491559 pod_ready.go:104] pod "coredns-5dd5756b68-b4f66" is not "Ready", error: <nil>
	W1101 09:28:38.396390 2491559 pod_ready.go:104] pod "coredns-5dd5756b68-b4f66" is not "Ready", error: <nil>
	I1101 09:28:40.396405 2491559 pod_ready.go:94] pod "coredns-5dd5756b68-b4f66" is "Ready"
	I1101 09:28:40.396432 2491559 pod_ready.go:86] duration metric: took 31.005940859s for pod "coredns-5dd5756b68-b4f66" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:28:40.399995 2491559 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-068218" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:28:40.407130 2491559 pod_ready.go:94] pod "etcd-old-k8s-version-068218" is "Ready"
	I1101 09:28:40.407154 2491559 pod_ready.go:86] duration metric: took 7.13531ms for pod "etcd-old-k8s-version-068218" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:28:40.409857 2491559 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-068218" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:28:40.414425 2491559 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-068218" is "Ready"
	I1101 09:28:40.414446 2491559 pod_ready.go:86] duration metric: took 4.567939ms for pod "kube-apiserver-old-k8s-version-068218" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:28:40.424339 2491559 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-068218" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:28:40.594415 2491559 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-068218" is "Ready"
	I1101 09:28:40.594445 2491559 pod_ready.go:86] duration metric: took 170.07303ms for pod "kube-controller-manager-old-k8s-version-068218" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:28:40.795051 2491559 pod_ready.go:83] waiting for pod "kube-proxy-9574h" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:28:41.194685 2491559 pod_ready.go:94] pod "kube-proxy-9574h" is "Ready"
	I1101 09:28:41.194717 2491559 pod_ready.go:86] duration metric: took 399.642303ms for pod "kube-proxy-9574h" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:28:41.395614 2491559 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-068218" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:28:41.794511 2491559 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-068218" is "Ready"
	I1101 09:28:41.794538 2491559 pod_ready.go:86] duration metric: took 398.89496ms for pod "kube-scheduler-old-k8s-version-068218" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:28:41.794554 2491559 pod_ready.go:40] duration metric: took 32.408340873s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:28:41.852590 2491559 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1101 09:28:41.855639 2491559 out.go:203] 
	W1101 09:28:41.858451 2491559 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1101 09:28:41.861304 2491559 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1101 09:28:41.864115 2491559 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-068218" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 01 09:28:42 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:42.371728971Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:28:42 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:42.389415914Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:28:42 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:42.389990036Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:28:42 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:42.407522521Z" level=info msg="Created container a5fae30fce3491b8f98375ff9f0a4ceabfd362edca9834cecb6515441319d920: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ljb5f/dashboard-metrics-scraper" id=698a0413-f93d-49d7-8269-464c27b0a0bd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:28:42 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:42.408669574Z" level=info msg="Starting container: a5fae30fce3491b8f98375ff9f0a4ceabfd362edca9834cecb6515441319d920" id=8e8ce472-f70a-484d-ac5f-86894c402c9d name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:28:42 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:42.412835366Z" level=info msg="Started container" PID=1659 containerID=a5fae30fce3491b8f98375ff9f0a4ceabfd362edca9834cecb6515441319d920 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ljb5f/dashboard-metrics-scraper id=8e8ce472-f70a-484d-ac5f-86894c402c9d name=/runtime.v1.RuntimeService/StartContainer sandboxID=f7d4c0d99538d37c2aee50def677ba733624a3bc0372a301137746f7a6820f89
	Nov 01 09:28:42 old-k8s-version-068218 conmon[1657]: conmon a5fae30fce3491b8f983 <ninfo>: container 1659 exited with status 1
	Nov 01 09:28:42 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:42.564505707Z" level=info msg="Removing container: 03c81ded95b883b274bc6dbbd9ef03122d76502dde5956dda1e74c3c1b42f6ff" id=8d7d050d-7d3d-4a84-9f2c-b6230e1b8ab4 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:28:42 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:42.57731089Z" level=info msg="Error loading conmon cgroup of container 03c81ded95b883b274bc6dbbd9ef03122d76502dde5956dda1e74c3c1b42f6ff: cgroup deleted" id=8d7d050d-7d3d-4a84-9f2c-b6230e1b8ab4 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:28:42 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:42.581856643Z" level=info msg="Removed container 03c81ded95b883b274bc6dbbd9ef03122d76502dde5956dda1e74c3c1b42f6ff: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ljb5f/dashboard-metrics-scraper" id=8d7d050d-7d3d-4a84-9f2c-b6230e1b8ab4 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:28:48 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:48.113852796Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:28:48 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:48.119077635Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:28:48 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:48.119113885Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:28:48 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:48.119140354Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:28:48 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:48.122163679Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:28:48 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:48.122197565Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:28:48 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:48.122221064Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:28:48 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:48.125267937Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:28:48 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:48.12529873Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:28:48 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:48.125322122Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:28:48 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:48.128349058Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:28:48 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:48.128382526Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:28:48 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:48.128406098Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:28:48 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:48.131511382Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:28:48 old-k8s-version-068218 crio[650]: time="2025-11-01T09:28:48.13154366Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	a5fae30fce349       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           16 seconds ago      Exited              dashboard-metrics-scraper   2                   f7d4c0d99538d       dashboard-metrics-scraper-5f989dc9cf-ljb5f       kubernetes-dashboard
	aa7739ac6e46f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           20 seconds ago      Running             storage-provisioner         2                   3f8ceffec7dfd       storage-provisioner                              kube-system
	af4e0baac89bb       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   30 seconds ago      Running             kubernetes-dashboard        0                   447b5163e1b51       kubernetes-dashboard-8694d4445c-cftdr            kubernetes-dashboard
	3762d722428dc       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           51 seconds ago      Running             coredns                     1                   2a92a2ae47813       coredns-5dd5756b68-b4f66                         kube-system
	3cbff79beb5a9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           51 seconds ago      Exited              storage-provisioner         1                   3f8ceffec7dfd       storage-provisioner                              kube-system
	cf694c627ea59       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   73ef8bd5b2270       busybox                                          default
	e915ac6e3880e       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           51 seconds ago      Running             kube-proxy                  1                   065ab670283e5       kube-proxy-9574h                                 kube-system
	eec2809483898       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           51 seconds ago      Running             kindnet-cni                 1                   4a9fd70ff377f       kindnet-8ks7s                                    kube-system
	fe733df8bf3e8       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           57 seconds ago      Running             kube-scheduler              1                   9e41e854b2c40       kube-scheduler-old-k8s-version-068218            kube-system
	067c804f8e218       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           57 seconds ago      Running             kube-apiserver              1                   26d784aba1f15       kube-apiserver-old-k8s-version-068218            kube-system
	74dcaccfb8d03       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           57 seconds ago      Running             kube-controller-manager     1                   4bddcde50d97b       kube-controller-manager-old-k8s-version-068218   kube-system
	7fb0e5e75636a       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           57 seconds ago      Running             etcd                        1                   d6b8f1be625e9       etcd-old-k8s-version-068218                      kube-system
	
	
	==> coredns [3762d722428dc59ef53f0455f537bb438e72cf8437c310c1a43dd9b5f7b7fb14] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47499 - 1305 "HINFO IN 219060550359639124.4578615173338036684. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014453154s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-068218
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-068218
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=old-k8s-version-068218
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_27_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:26:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-068218
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:28:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:28:37 +0000   Sat, 01 Nov 2025 09:26:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:28:37 +0000   Sat, 01 Nov 2025 09:26:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:28:37 +0000   Sat, 01 Nov 2025 09:26:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:28:37 +0000   Sat, 01 Nov 2025 09:27:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-068218
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                84351bdd-8654-4943-b8ea-c75bd6268b89
	  Boot ID:                    eebecd53-57fd-46e5-aa39-103fca906436
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-5dd5756b68-b4f66                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     107s
	  kube-system                 etcd-old-k8s-version-068218                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         119s
	  kube-system                 kindnet-8ks7s                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-old-k8s-version-068218             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-old-k8s-version-068218    200m (10%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-9574h                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-old-k8s-version-068218             100m (5%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-ljb5f        0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-cftdr             0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 105s                 kube-proxy       
	  Normal  Starting                 50s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m7s (x9 over 2m7s)  kubelet          Node old-k8s-version-068218 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m7s (x8 over 2m7s)  kubelet          Node old-k8s-version-068218 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m7s (x7 over 2m7s)  kubelet          Node old-k8s-version-068218 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     119s                 kubelet          Node old-k8s-version-068218 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  119s                 kubelet          Node old-k8s-version-068218 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s                 kubelet          Node old-k8s-version-068218 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 119s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s                 node-controller  Node old-k8s-version-068218 event: Registered Node old-k8s-version-068218 in Controller
	  Normal  NodeReady                93s                  kubelet          Node old-k8s-version-068218 status is now: NodeReady
	  Normal  Starting                 58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)    kubelet          Node old-k8s-version-068218 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)    kubelet          Node old-k8s-version-068218 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)    kubelet          Node old-k8s-version-068218 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           40s                  node-controller  Node old-k8s-version-068218 event: Registered Node old-k8s-version-068218 in Controller
	
	
	==> dmesg <==
	[Nov 1 09:03] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:08] overlayfs: idmapped layers are currently not supported
	[ +35.036001] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:10] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:11] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:12] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:13] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:14] overlayfs: idmapped layers are currently not supported
	[  +7.992192] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:15] overlayfs: idmapped layers are currently not supported
	[ +24.457663] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:16] overlayfs: idmapped layers are currently not supported
	[ +26.408819] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:18] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:22] overlayfs: idmapped layers are currently not supported
	[ +31.970573] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:24] overlayfs: idmapped layers are currently not supported
	[ +34.721891] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:25] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:26] overlayfs: idmapped layers are currently not supported
	[  +0.217637] overlayfs: idmapped layers are currently not supported
	[ +42.063471] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:28] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [7fb0e5e75636afbc0298538d44e50df7785d62e2185f396e1c8404fbf222a6e4] <==
	{"level":"info","ts":"2025-11-01T09:28:02.241883Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T09:28:02.241928Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T09:28:02.242123Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-01T09:28:02.2424Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-01T09:28:02.24248Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-01T09:28:02.242904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-01T09:28:02.243006Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-01T09:28:02.243126Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T09:28:02.252184Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T09:28:02.253189Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-01T09:28:02.292872Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-01T09:28:03.172233Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-01T09:28:03.172285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-01T09:28:03.172306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-01T09:28:03.172319Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-11-01T09:28:03.172325Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-01T09:28:03.172339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-11-01T09:28:03.172347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-01T09:28:03.17742Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-068218 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-01T09:28:03.177604Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T09:28:03.178597Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-01T09:28:03.178911Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T09:28:03.179767Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-01T09:28:03.205762Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-01T09:28:03.205804Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 09:28:59 up 18:11,  0 user,  load average: 2.13, 3.51, 2.91
	Linux old-k8s-version-068218 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [eec280948389885da1b27c55ff4b58fbb0c1a0294e5d4c42be0a4b9d1da3ad5c] <==
	I1101 09:28:07.913871       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:28:07.914111       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 09:28:07.914235       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:28:07.914253       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:28:07.914267       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:28:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:28:08.148754       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:28:08.153452       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:28:08.153484       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:28:08.153942       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 09:28:38.109560       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 09:28:38.149250       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 09:28:38.154737       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 09:28:38.154902       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1101 09:28:39.753994       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:28:39.754026       1 metrics.go:72] Registering metrics
	I1101 09:28:39.754084       1 controller.go:711] "Syncing nftables rules"
	I1101 09:28:48.112691       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 09:28:48.112842       1 main.go:301] handling current node
	I1101 09:28:58.116022       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 09:28:58.116059       1 main.go:301] handling current node
	
	
	==> kube-apiserver [067c804f8e21876fb45f3c152802ae3d319e8a7ba1a0ed58c096fa2d93f176f8] <==
	I1101 09:28:06.784666       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1101 09:28:06.962996       1 shared_informer.go:318] Caches are synced for configmaps
	I1101 09:28:06.986978       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1101 09:28:06.991744       1 aggregator.go:166] initial CRD sync complete...
	I1101 09:28:06.991888       1 autoregister_controller.go:141] Starting autoregister controller
	I1101 09:28:06.991923       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:28:07.019686       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:28:07.055418       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 09:28:07.074548       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1101 09:28:07.074644       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1101 09:28:07.075315       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1101 09:28:07.075801       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1101 09:28:07.081056       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1101 09:28:07.107362       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:28:07.791467       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:28:09.146172       1 controller.go:624] quota admission added evaluator for: namespaces
	I1101 09:28:09.196748       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1101 09:28:09.224417       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:28:09.233573       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:28:09.242550       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1101 09:28:09.297541       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.205.2"}
	I1101 09:28:09.319151       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.33.244"}
	I1101 09:28:19.289476       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1101 09:28:19.310896       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:28:19.481769       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [74dcaccfb8d03e88ec7bc0d5f860e724acc1ef7e6b6647ac057b5ec4884a4749] <==
	I1101 09:28:19.390199       1 shared_informer.go:318] Caches are synced for stateful set
	I1101 09:28:19.398700       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="96.494766ms"
	I1101 09:28:19.405341       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="20.06277ms"
	I1101 09:28:19.405430       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="31.03µs"
	I1101 09:28:19.411913       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="13.109863ms"
	I1101 09:28:19.412974       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="854.705µs"
	I1101 09:28:19.424670       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="60.963µs"
	I1101 09:28:19.424818       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 09:28:19.435476       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1101 09:28:19.448169       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="108.289µs"
	I1101 09:28:19.456000       1 shared_informer.go:318] Caches are synced for cronjob
	I1101 09:28:19.465896       1 shared_informer.go:318] Caches are synced for endpoint
	I1101 09:28:19.489680       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 09:28:19.849615       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 09:28:19.879805       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 09:28:19.879958       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1101 09:28:24.516721       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="85.569µs"
	I1101 09:28:25.533110       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="80.654µs"
	I1101 09:28:26.537005       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.192µs"
	I1101 09:28:29.549556       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.660752ms"
	I1101 09:28:29.549742       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="85.971µs"
	I1101 09:28:40.248466       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.779461ms"
	I1101 09:28:40.248738       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.6µs"
	I1101 09:28:42.580226       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="49.828µs"
	I1101 09:28:49.700328       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="49.763µs"
	
	
	==> kube-proxy [e915ac6e3880e2ad0729af6f8b7d39ad7dac08fd8419522abb00a0450855afa9] <==
	I1101 09:28:08.619080       1 server_others.go:69] "Using iptables proxy"
	I1101 09:28:08.660895       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1101 09:28:08.715658       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:28:08.717777       1 server_others.go:152] "Using iptables Proxier"
	I1101 09:28:08.717809       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1101 09:28:08.717817       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1101 09:28:08.717845       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 09:28:08.718052       1 server.go:846] "Version info" version="v1.28.0"
	I1101 09:28:08.718070       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:28:08.719276       1 config.go:188] "Starting service config controller"
	I1101 09:28:08.719286       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 09:28:08.719303       1 config.go:97] "Starting endpoint slice config controller"
	I1101 09:28:08.719309       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 09:28:08.719701       1 config.go:315] "Starting node config controller"
	I1101 09:28:08.719708       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 09:28:08.819502       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1101 09:28:08.819550       1 shared_informer.go:318] Caches are synced for service config
	I1101 09:28:08.819829       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [fe733df8bf3e845ffe6b6dedb1032f3540ea13212061a9c8d745c49a950708c5] <==
	I1101 09:28:05.553981       1 serving.go:348] Generated self-signed cert in-memory
	I1101 09:28:08.470256       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1101 09:28:08.470284       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:28:08.483397       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1101 09:28:08.483490       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1101 09:28:08.483508       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1101 09:28:08.483527       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1101 09:28:08.487086       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:28:08.487116       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 09:28:08.487188       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:28:08.487199       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1101 09:28:08.592290       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 09:28:08.592309       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1101 09:28:08.592346       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Nov 01 09:28:19 old-k8s-version-068218 kubelet[773]: I1101 09:28:19.539755     773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2f1b12d1-75ce-4b81-b8a2-ac87de146e8c-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-ljb5f\" (UID: \"2f1b12d1-75ce-4b81-b8a2-ac87de146e8c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ljb5f"
	Nov 01 09:28:19 old-k8s-version-068218 kubelet[773]: I1101 09:28:19.539797     773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnmfp\" (UniqueName: \"kubernetes.io/projected/846c840d-7045-4409-8bdb-bf9e147f23b8-kube-api-access-qnmfp\") pod \"kubernetes-dashboard-8694d4445c-cftdr\" (UID: \"846c840d-7045-4409-8bdb-bf9e147f23b8\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-cftdr"
	Nov 01 09:28:19 old-k8s-version-068218 kubelet[773]: I1101 09:28:19.539835     773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4rl5\" (UniqueName: \"kubernetes.io/projected/2f1b12d1-75ce-4b81-b8a2-ac87de146e8c-kube-api-access-v4rl5\") pod \"dashboard-metrics-scraper-5f989dc9cf-ljb5f\" (UID: \"2f1b12d1-75ce-4b81-b8a2-ac87de146e8c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ljb5f"
	Nov 01 09:28:19 old-k8s-version-068218 kubelet[773]: W1101 09:28:19.722057     773 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e88ec4f29f189ceff4fe4bdf474ad9f9e0ae1e6116ca92110016a09e33532bf4/crio-f7d4c0d99538d37c2aee50def677ba733624a3bc0372a301137746f7a6820f89 WatchSource:0}: Error finding container f7d4c0d99538d37c2aee50def677ba733624a3bc0372a301137746f7a6820f89: Status 404 returned error can't find the container with id f7d4c0d99538d37c2aee50def677ba733624a3bc0372a301137746f7a6820f89
	Nov 01 09:28:19 old-k8s-version-068218 kubelet[773]: W1101 09:28:19.737043     773 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e88ec4f29f189ceff4fe4bdf474ad9f9e0ae1e6116ca92110016a09e33532bf4/crio-447b5163e1b515ad69007a050b1fd4eefd1e71087322ada7c3c52fa6fde14705 WatchSource:0}: Error finding container 447b5163e1b515ad69007a050b1fd4eefd1e71087322ada7c3c52fa6fde14705: Status 404 returned error can't find the container with id 447b5163e1b515ad69007a050b1fd4eefd1e71087322ada7c3c52fa6fde14705
	Nov 01 09:28:24 old-k8s-version-068218 kubelet[773]: I1101 09:28:24.503129     773 scope.go:117] "RemoveContainer" containerID="d1347ec5c6fb9a5b6f98d371f91f5653f58110826eba2ddb305f6ec53f9f4a26"
	Nov 01 09:28:25 old-k8s-version-068218 kubelet[773]: I1101 09:28:25.512156     773 scope.go:117] "RemoveContainer" containerID="03c81ded95b883b274bc6dbbd9ef03122d76502dde5956dda1e74c3c1b42f6ff"
	Nov 01 09:28:25 old-k8s-version-068218 kubelet[773]: E1101 09:28:25.512446     773 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-ljb5f_kubernetes-dashboard(2f1b12d1-75ce-4b81-b8a2-ac87de146e8c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ljb5f" podUID="2f1b12d1-75ce-4b81-b8a2-ac87de146e8c"
	Nov 01 09:28:25 old-k8s-version-068218 kubelet[773]: I1101 09:28:25.512784     773 scope.go:117] "RemoveContainer" containerID="d1347ec5c6fb9a5b6f98d371f91f5653f58110826eba2ddb305f6ec53f9f4a26"
	Nov 01 09:28:26 old-k8s-version-068218 kubelet[773]: I1101 09:28:26.518359     773 scope.go:117] "RemoveContainer" containerID="03c81ded95b883b274bc6dbbd9ef03122d76502dde5956dda1e74c3c1b42f6ff"
	Nov 01 09:28:26 old-k8s-version-068218 kubelet[773]: E1101 09:28:26.518808     773 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-ljb5f_kubernetes-dashboard(2f1b12d1-75ce-4b81-b8a2-ac87de146e8c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ljb5f" podUID="2f1b12d1-75ce-4b81-b8a2-ac87de146e8c"
	Nov 01 09:28:29 old-k8s-version-068218 kubelet[773]: I1101 09:28:29.686174     773 scope.go:117] "RemoveContainer" containerID="03c81ded95b883b274bc6dbbd9ef03122d76502dde5956dda1e74c3c1b42f6ff"
	Nov 01 09:28:29 old-k8s-version-068218 kubelet[773]: E1101 09:28:29.686477     773 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-ljb5f_kubernetes-dashboard(2f1b12d1-75ce-4b81-b8a2-ac87de146e8c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ljb5f" podUID="2f1b12d1-75ce-4b81-b8a2-ac87de146e8c"
	Nov 01 09:28:38 old-k8s-version-068218 kubelet[773]: I1101 09:28:38.542780     773 scope.go:117] "RemoveContainer" containerID="3cbff79beb5a9432964e10a6930c81e374df801ea1c933508cf2b39f6c5c86b2"
	Nov 01 09:28:38 old-k8s-version-068218 kubelet[773]: I1101 09:28:38.577155     773 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-cftdr" podStartSLOduration=10.761139665 podCreationTimestamp="2025-11-01 09:28:19 +0000 UTC" firstStartedPulling="2025-11-01 09:28:19.739417634 +0000 UTC m=+18.578358969" lastFinishedPulling="2025-11-01 09:28:28.55537586 +0000 UTC m=+27.394317195" observedRunningTime="2025-11-01 09:28:29.536781023 +0000 UTC m=+28.375722350" watchObservedRunningTime="2025-11-01 09:28:38.577097891 +0000 UTC m=+37.416039226"
	Nov 01 09:28:42 old-k8s-version-068218 kubelet[773]: I1101 09:28:42.367622     773 scope.go:117] "RemoveContainer" containerID="03c81ded95b883b274bc6dbbd9ef03122d76502dde5956dda1e74c3c1b42f6ff"
	Nov 01 09:28:42 old-k8s-version-068218 kubelet[773]: I1101 09:28:42.561713     773 scope.go:117] "RemoveContainer" containerID="03c81ded95b883b274bc6dbbd9ef03122d76502dde5956dda1e74c3c1b42f6ff"
	Nov 01 09:28:42 old-k8s-version-068218 kubelet[773]: I1101 09:28:42.563443     773 scope.go:117] "RemoveContainer" containerID="a5fae30fce3491b8f98375ff9f0a4ceabfd362edca9834cecb6515441319d920"
	Nov 01 09:28:42 old-k8s-version-068218 kubelet[773]: E1101 09:28:42.563745     773 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-ljb5f_kubernetes-dashboard(2f1b12d1-75ce-4b81-b8a2-ac87de146e8c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ljb5f" podUID="2f1b12d1-75ce-4b81-b8a2-ac87de146e8c"
	Nov 01 09:28:49 old-k8s-version-068218 kubelet[773]: I1101 09:28:49.685940     773 scope.go:117] "RemoveContainer" containerID="a5fae30fce3491b8f98375ff9f0a4ceabfd362edca9834cecb6515441319d920"
	Nov 01 09:28:49 old-k8s-version-068218 kubelet[773]: E1101 09:28:49.686727     773 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-ljb5f_kubernetes-dashboard(2f1b12d1-75ce-4b81-b8a2-ac87de146e8c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ljb5f" podUID="2f1b12d1-75ce-4b81-b8a2-ac87de146e8c"
	Nov 01 09:28:54 old-k8s-version-068218 kubelet[773]: I1101 09:28:54.172757     773 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 01 09:28:54 old-k8s-version-068218 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 09:28:54 old-k8s-version-068218 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 09:28:54 old-k8s-version-068218 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [af4e0baac89bbf43554732d2ba200bf33d3c88daff4b532594b9253c2c92686f] <==
	2025/11/01 09:28:28 Using namespace: kubernetes-dashboard
	2025/11/01 09:28:28 Using in-cluster config to connect to apiserver
	2025/11/01 09:28:28 Using secret token for csrf signing
	2025/11/01 09:28:28 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 09:28:28 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 09:28:28 Successful initial request to the apiserver, version: v1.28.0
	2025/11/01 09:28:28 Generating JWE encryption key
	2025/11/01 09:28:28 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 09:28:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 09:28:28 Initializing JWE encryption key from synchronized object
	2025/11/01 09:28:28 Creating in-cluster Sidecar client
	2025/11/01 09:28:28 Serving insecurely on HTTP port: 9090
	2025/11/01 09:28:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:28:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:28:28 Starting overwatch
	
	
	==> storage-provisioner [3cbff79beb5a9432964e10a6930c81e374df801ea1c933508cf2b39f6c5c86b2] <==
	I1101 09:28:07.934468       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 09:28:38.007354       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [aa7739ac6e46f17ba37552d3aad001e0f45adf530a865a141ec3e994a46cee75] <==
	I1101 09:28:38.590161       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 09:28:38.604255       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 09:28:38.604304       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1101 09:28:56.003442       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 09:28:56.005703       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3ce5421b-f133-4a3c-9fef-747d273e5cf2", APIVersion:"v1", ResourceVersion:"662", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-068218_b4864570-fe9a-4d4b-a94d-80dc1869ea92 became leader
	I1101 09:28:56.005762       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-068218_b4864570-fe9a-4d4b-a94d-80dc1869ea92!
	I1101 09:28:56.106237       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-068218_b4864570-fe9a-4d4b-a94d-80dc1869ea92!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-068218 -n old-k8s-version-068218
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-068218 -n old-k8s-version-068218: exit status 2 (554.210549ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-068218 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.66s)

x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.33s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-357229 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-357229 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (307.713653ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:30:25Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-357229 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-357229 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-357229 describe deploy/metrics-server -n kube-system: exit status 1 (78.481419ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-357229 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-357229
helpers_test.go:243: (dbg) docker inspect no-preload-357229:

-- stdout --
	[
	    {
	        "Id": "6863b4e551e28c0cdf28394d17eb7fbc923c22a70b5222d3093502d55d1412b9",
	        "Created": "2025-11-01T09:29:04.610428393Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2495421,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:29:04.680533892Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/6863b4e551e28c0cdf28394d17eb7fbc923c22a70b5222d3093502d55d1412b9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6863b4e551e28c0cdf28394d17eb7fbc923c22a70b5222d3093502d55d1412b9/hostname",
	        "HostsPath": "/var/lib/docker/containers/6863b4e551e28c0cdf28394d17eb7fbc923c22a70b5222d3093502d55d1412b9/hosts",
	        "LogPath": "/var/lib/docker/containers/6863b4e551e28c0cdf28394d17eb7fbc923c22a70b5222d3093502d55d1412b9/6863b4e551e28c0cdf28394d17eb7fbc923c22a70b5222d3093502d55d1412b9-json.log",
	        "Name": "/no-preload-357229",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-357229:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-357229",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6863b4e551e28c0cdf28394d17eb7fbc923c22a70b5222d3093502d55d1412b9",
	                "LowerDir": "/var/lib/docker/overlay2/5cc099e0674d206e58658b98596baea6d36e69290a8f09a34c31e1233de8e33b-init/diff:/var/lib/docker/overlay2/e248e2c4c8c52e2b41c7098e27a1e6d3433c7b0d01c47093073da500268c4b77/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5cc099e0674d206e58658b98596baea6d36e69290a8f09a34c31e1233de8e33b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5cc099e0674d206e58658b98596baea6d36e69290a8f09a34c31e1233de8e33b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5cc099e0674d206e58658b98596baea6d36e69290a8f09a34c31e1233de8e33b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-357229",
	                "Source": "/var/lib/docker/volumes/no-preload-357229/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-357229",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-357229",
	                "name.minikube.sigs.k8s.io": "no-preload-357229",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6cf39e0dc13643ec0583e7242c081b7470c4a267748381e8db75be877c6b73ff",
	            "SandboxKey": "/var/run/docker/netns/6cf39e0dc136",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36345"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36346"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36349"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36347"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36348"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-357229": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:7a:9c:07:0b:59",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9c399d9cfbf1bf49ecabecfc0553884dd8ceaaa3ff2f3c1310f3dc120db9b811",
	                    "EndpointID": "85c042978fd349dcd1935464f02954f77f2da33df8ef936d9a5383c18269f7fd",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-357229",
	                        "6863b4e551e2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-357229 -n no-preload-357229
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-357229 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-357229 logs -n 25: (1.789863338s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ delete  │ -p cilium-206273                                                                                                                                                                                                                              │ cilium-206273            │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │ 01 Nov 25 09:25 UTC │
	│ start   │ -p force-systemd-env-778652 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-778652 │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │ 01 Nov 25 09:25 UTC │
	│ start   │ -p pause-951206 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-951206             │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │ 01 Nov 25 09:25 UTC │
	│ pause   │ -p pause-951206 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-951206             │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │                     │
	│ delete  │ -p force-systemd-env-778652                                                                                                                                                                                                                   │ force-systemd-env-778652 │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │ 01 Nov 25 09:25 UTC │
	│ delete  │ -p pause-951206                                                                                                                                                                                                                               │ pause-951206             │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │ 01 Nov 25 09:25 UTC │
	│ start   │ -p cert-expiration-218273 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-218273   │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │ 01 Nov 25 09:26 UTC │
	│ start   │ -p cert-options-578478 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-578478      │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │ 01 Nov 25 09:26 UTC │
	│ ssh     │ cert-options-578478 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-578478      │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ 01 Nov 25 09:26 UTC │
	│ ssh     │ -p cert-options-578478 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-578478      │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ 01 Nov 25 09:26 UTC │
	│ delete  │ -p cert-options-578478                                                                                                                                                                                                                        │ cert-options-578478      │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ 01 Nov 25 09:26 UTC │
	│ start   │ -p old-k8s-version-068218 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-068218   │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ 01 Nov 25 09:27 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-068218 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-068218   │ jenkins │ v1.37.0 │ 01 Nov 25 09:27 UTC │                     │
	│ stop    │ -p old-k8s-version-068218 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-068218   │ jenkins │ v1.37.0 │ 01 Nov 25 09:27 UTC │ 01 Nov 25 09:27 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-068218 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-068218   │ jenkins │ v1.37.0 │ 01 Nov 25 09:27 UTC │ 01 Nov 25 09:27 UTC │
	│ start   │ -p old-k8s-version-068218 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-068218   │ jenkins │ v1.37.0 │ 01 Nov 25 09:27 UTC │ 01 Nov 25 09:28 UTC │
	│ image   │ old-k8s-version-068218 image list --format=json                                                                                                                                                                                               │ old-k8s-version-068218   │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │ 01 Nov 25 09:28 UTC │
	│ pause   │ -p old-k8s-version-068218 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-068218   │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │                     │
	│ delete  │ -p old-k8s-version-068218                                                                                                                                                                                                                     │ old-k8s-version-068218   │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ delete  │ -p old-k8s-version-068218                                                                                                                                                                                                                     │ old-k8s-version-068218   │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ start   │ -p no-preload-357229 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-357229        │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:30 UTC │
	│ start   │ -p cert-expiration-218273 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-218273   │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ delete  │ -p cert-expiration-218273                                                                                                                                                                                                                     │ cert-expiration-218273   │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ start   │ -p embed-certs-312549 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-312549       │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-357229 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-357229        │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:29:58
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:29:58.615305 2499376 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:29:58.615514 2499376 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:29:58.615542 2499376 out.go:374] Setting ErrFile to fd 2...
	I1101 09:29:58.615560 2499376 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:29:58.615832 2499376 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 09:29:58.619098 2499376 out.go:368] Setting JSON to false
	I1101 09:29:58.620132 2499376 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":65545,"bootTime":1761923854,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 09:29:58.620226 2499376 start.go:143] virtualization:  
	I1101 09:29:58.624437 2499376 out.go:179] * [embed-certs-312549] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 09:29:58.627587 2499376 notify.go:221] Checking for updates...
	I1101 09:29:58.631083 2499376 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:29:58.634383 2499376 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:29:58.638202 2499376 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:29:58.641102 2499376 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2314135/.minikube
	I1101 09:29:58.644806 2499376 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 09:29:58.646935 2499376 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:29:58.650582 2499376 config.go:182] Loaded profile config "no-preload-357229": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:29:58.650676 2499376 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:29:58.701579 2499376 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:29:58.701775 2499376 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:29:58.809518 2499376 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 09:29:58.797385868 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:29:58.809623 2499376 docker.go:319] overlay module found
	I1101 09:29:58.813013 2499376 out.go:179] * Using the docker driver based on user configuration
	I1101 09:29:58.816030 2499376 start.go:309] selected driver: docker
	I1101 09:29:58.816049 2499376 start.go:930] validating driver "docker" against <nil>
	I1101 09:29:58.816064 2499376 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:29:58.816755 2499376 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:29:58.951717 2499376 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 09:29:58.938966192 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:29:58.951879 2499376 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:29:58.952137 2499376 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:29:58.955339 2499376 out.go:179] * Using Docker driver with root privileges
	I1101 09:29:58.958145 2499376 cni.go:84] Creating CNI manager for ""
	I1101 09:29:58.958221 2499376 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:29:58.958230 2499376 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:29:58.958315 2499376 start.go:353] cluster config:
	{Name:embed-certs-312549 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-312549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:29:58.963469 2499376 out.go:179] * Starting "embed-certs-312549" primary control-plane node in "embed-certs-312549" cluster
	I1101 09:29:58.966425 2499376 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:29:58.969401 2499376 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:29:58.972348 2499376 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:29:58.972408 2499376 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 09:29:58.972418 2499376 cache.go:59] Caching tarball of preloaded images
	I1101 09:29:58.972512 2499376 preload.go:233] Found /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 09:29:58.972522 2499376 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:29:58.972644 2499376 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/embed-certs-312549/config.json ...
	I1101 09:29:58.972664 2499376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/embed-certs-312549/config.json: {Name:mk04e6e3433a1b3c817fe56b22b417fe05aac991 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:58.972804 2499376 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:29:59.002970 2499376 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:29:59.002993 2499376 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:29:59.003007 2499376 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:29:59.003050 2499376 start.go:360] acquireMachinesLock for embed-certs-312549: {Name:mkc891654a695438e19d0a82e76ef43fc02ba964 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:29:59.003166 2499376 start.go:364] duration metric: took 99.23µs to acquireMachinesLock for "embed-certs-312549"
	I1101 09:29:59.003195 2499376 start.go:93] Provisioning new machine with config: &{Name:embed-certs-312549 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-312549 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:29:59.003272 2499376 start.go:125] createHost starting for "" (driver="docker")
	I1101 09:29:59.260041 2495115 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.038177762s)
	I1101 09:29:59.260823 2495115 node_ready.go:35] waiting up to 6m0s for node "no-preload-357229" to be "Ready" ...
	I1101 09:29:59.261763 2495115 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.217934642s)
	I1101 09:29:59.261781 2495115 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1101 09:29:59.768218 2495115 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-357229" context rescaled to 1 replicas
	I1101 09:30:00.062816 2495115 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.674646809s)
	I1101 09:30:00.062878 2495115 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.567610052s)
	I1101 09:30:00.287589 2495115 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 09:30:00.290669 2495115 addons.go:515] duration metric: took 2.56174116s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1101 09:30:01.264735 2495115 node_ready.go:57] node "no-preload-357229" has "Ready":"False" status (will retry)
	W1101 09:30:03.265238 2495115 node_ready.go:57] node "no-preload-357229" has "Ready":"False" status (will retry)
	I1101 09:29:59.006836 2499376 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 09:29:59.007127 2499376 start.go:159] libmachine.API.Create for "embed-certs-312549" (driver="docker")
	I1101 09:29:59.007158 2499376 client.go:173] LocalClient.Create starting
	I1101 09:29:59.007247 2499376 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem
	I1101 09:29:59.007282 2499376 main.go:143] libmachine: Decoding PEM data...
	I1101 09:29:59.007295 2499376 main.go:143] libmachine: Parsing certificate...
	I1101 09:29:59.007352 2499376 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem
	I1101 09:29:59.007373 2499376 main.go:143] libmachine: Decoding PEM data...
	I1101 09:29:59.007383 2499376 main.go:143] libmachine: Parsing certificate...
	I1101 09:29:59.007759 2499376 cli_runner.go:164] Run: docker network inspect embed-certs-312549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 09:29:59.031142 2499376 cli_runner.go:211] docker network inspect embed-certs-312549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 09:29:59.031215 2499376 network_create.go:284] running [docker network inspect embed-certs-312549] to gather additional debugging logs...
	I1101 09:29:59.031246 2499376 cli_runner.go:164] Run: docker network inspect embed-certs-312549
	W1101 09:29:59.061246 2499376 cli_runner.go:211] docker network inspect embed-certs-312549 returned with exit code 1
	I1101 09:29:59.061273 2499376 network_create.go:287] error running [docker network inspect embed-certs-312549]: docker network inspect embed-certs-312549: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-312549 not found
	I1101 09:29:59.061285 2499376 network_create.go:289] output of [docker network inspect embed-certs-312549]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-312549 not found
	
	** /stderr **
	I1101 09:29:59.061415 2499376 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:29:59.093225 2499376 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2d14cb2bf967 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:44:96:dd:d5:f7} reservation:<nil>}
	I1101 09:29:59.093561 2499376 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5e2113ca68f6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fa:43:2d:73:9d:6f} reservation:<nil>}
	I1101 09:29:59.093932 2499376 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-06825307e87a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:46:bb:6a:93:8e:bc} reservation:<nil>}
	I1101 09:29:59.094341 2499376 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018fdde0}
	I1101 09:29:59.094357 2499376 network_create.go:124] attempt to create docker network embed-certs-312549 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1101 09:29:59.094417 2499376 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-312549 embed-certs-312549
	I1101 09:29:59.201529 2499376 network_create.go:108] docker network embed-certs-312549 192.168.76.0/24 created
	I1101 09:29:59.201557 2499376 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-312549" container
	I1101 09:29:59.201645 2499376 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 09:29:59.229535 2499376 cli_runner.go:164] Run: docker volume create embed-certs-312549 --label name.minikube.sigs.k8s.io=embed-certs-312549 --label created_by.minikube.sigs.k8s.io=true
	I1101 09:29:59.250891 2499376 oci.go:103] Successfully created a docker volume embed-certs-312549
	I1101 09:29:59.250964 2499376 cli_runner.go:164] Run: docker run --rm --name embed-certs-312549-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-312549 --entrypoint /usr/bin/test -v embed-certs-312549:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 09:29:59.937900 2499376 oci.go:107] Successfully prepared a docker volume embed-certs-312549
	I1101 09:29:59.937945 2499376 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:29:59.937965 2499376 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 09:29:59.938030 2499376 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-312549:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1101 09:30:05.763475 2495115 node_ready.go:57] node "no-preload-357229" has "Ready":"False" status (will retry)
	W1101 09:30:07.763654 2495115 node_ready.go:57] node "no-preload-357229" has "Ready":"False" status (will retry)
	I1101 09:30:05.399293 2499376 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-312549:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.46122112s)
	I1101 09:30:05.399328 2499376 kic.go:203] duration metric: took 5.461358872s to extract preloaded images to volume ...
	W1101 09:30:05.399455 2499376 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 09:30:05.399568 2499376 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 09:30:05.454618 2499376 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-312549 --name embed-certs-312549 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-312549 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-312549 --network embed-certs-312549 --ip 192.168.76.2 --volume embed-certs-312549:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 09:30:05.752598 2499376 cli_runner.go:164] Run: docker container inspect embed-certs-312549 --format={{.State.Running}}
	I1101 09:30:05.778425 2499376 cli_runner.go:164] Run: docker container inspect embed-certs-312549 --format={{.State.Status}}
	I1101 09:30:05.800090 2499376 cli_runner.go:164] Run: docker exec embed-certs-312549 stat /var/lib/dpkg/alternatives/iptables
	I1101 09:30:05.851499 2499376 oci.go:144] the created container "embed-certs-312549" has a running status.
	I1101 09:30:05.851533 2499376 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/embed-certs-312549/id_rsa...
	I1101 09:30:06.292626 2499376 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/embed-certs-312549/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 09:30:06.317645 2499376 cli_runner.go:164] Run: docker container inspect embed-certs-312549 --format={{.State.Status}}
	I1101 09:30:06.340393 2499376 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 09:30:06.340415 2499376 kic_runner.go:114] Args: [docker exec --privileged embed-certs-312549 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 09:30:06.399538 2499376 cli_runner.go:164] Run: docker container inspect embed-certs-312549 --format={{.State.Status}}
	I1101 09:30:06.423553 2499376 machine.go:94] provisionDockerMachine start ...
	I1101 09:30:06.423659 2499376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-312549
	I1101 09:30:06.452069 2499376 main.go:143] libmachine: Using SSH client type: native
	I1101 09:30:06.452424 2499376 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36350 <nil> <nil>}
	I1101 09:30:06.452434 2499376 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:30:06.453062 2499376 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40590->127.0.0.1:36350: read: connection reset by peer
	I1101 09:30:09.603351 2499376 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-312549
	
	I1101 09:30:09.603381 2499376 ubuntu.go:182] provisioning hostname "embed-certs-312549"
	I1101 09:30:09.603452 2499376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-312549
	I1101 09:30:09.620639 2499376 main.go:143] libmachine: Using SSH client type: native
	I1101 09:30:09.620947 2499376 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36350 <nil> <nil>}
	I1101 09:30:09.620976 2499376 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-312549 && echo "embed-certs-312549" | sudo tee /etc/hostname
	I1101 09:30:09.786064 2499376 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-312549
	
	I1101 09:30:09.786170 2499376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-312549
	I1101 09:30:09.804756 2499376 main.go:143] libmachine: Using SSH client type: native
	I1101 09:30:09.805064 2499376 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36350 <nil> <nil>}
	I1101 09:30:09.805084 2499376 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-312549' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-312549/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-312549' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:30:09.951948 2499376 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:30:09.951975 2499376 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-2314135/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-2314135/.minikube}
	I1101 09:30:09.952003 2499376 ubuntu.go:190] setting up certificates
	I1101 09:30:09.952024 2499376 provision.go:84] configureAuth start
	I1101 09:30:09.952086 2499376 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-312549
	I1101 09:30:09.974622 2499376 provision.go:143] copyHostCerts
	I1101 09:30:09.974691 2499376 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem, removing ...
	I1101 09:30:09.974705 2499376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem
	I1101 09:30:09.974785 2499376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem (1082 bytes)
	I1101 09:30:09.974878 2499376 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem, removing ...
	I1101 09:30:09.974887 2499376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem
	I1101 09:30:09.974914 2499376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem (1123 bytes)
	I1101 09:30:09.974970 2499376 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem, removing ...
	I1101 09:30:09.974977 2499376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem
	I1101 09:30:09.975009 2499376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem (1675 bytes)
	I1101 09:30:09.975063 2499376 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem org=jenkins.embed-certs-312549 san=[127.0.0.1 192.168.76.2 embed-certs-312549 localhost minikube]
	I1101 09:30:10.221164 2499376 provision.go:177] copyRemoteCerts
	I1101 09:30:10.221232 2499376 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:30:10.221285 2499376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-312549
	I1101 09:30:10.238812 2499376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36350 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/embed-certs-312549/id_rsa Username:docker}
	I1101 09:30:10.344134 2499376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:30:10.360895 2499376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1101 09:30:10.378198 2499376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
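The server certificate generated above is signed for the SANs listed in the log (127.0.0.1, 192.168.76.2, embed-certs-312549, localhost, minikube) and is copied to /etc/docker on the node. A hedged way to confirm those SANs on the node (illustrative, not part of the run):

	# Run inside the node, e.g. via `docker exec embed-certs-312549 ...`
	openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'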
	I1101 09:30:10.396775 2499376 provision.go:87] duration metric: took 444.726799ms to configureAuth
	I1101 09:30:10.396810 2499376 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:30:10.396995 2499376 config.go:182] Loaded profile config "embed-certs-312549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:30:10.397108 2499376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-312549
	I1101 09:30:10.415474 2499376 main.go:143] libmachine: Using SSH client type: native
	I1101 09:30:10.415784 2499376 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36350 <nil> <nil>}
	I1101 09:30:10.415805 2499376 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:30:10.682722 2499376 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:30:10.682744 2499376 machine.go:97] duration metric: took 4.259171618s to provisionDockerMachine
	I1101 09:30:10.682754 2499376 client.go:176] duration metric: took 11.675590107s to LocalClient.Create
	I1101 09:30:10.682779 2499376 start.go:167] duration metric: took 11.67564193s to libmachine.API.Create "embed-certs-312549"
	I1101 09:30:10.682801 2499376 start.go:293] postStartSetup for "embed-certs-312549" (driver="docker")
	I1101 09:30:10.682815 2499376 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:30:10.682883 2499376 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:30:10.682933 2499376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-312549
	I1101 09:30:10.700318 2499376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36350 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/embed-certs-312549/id_rsa Username:docker}
	I1101 09:30:10.804056 2499376 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:30:10.807332 2499376 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:30:10.807362 2499376 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:30:10.807372 2499376 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/addons for local assets ...
	I1101 09:30:10.807436 2499376 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/files for local assets ...
	I1101 09:30:10.807525 2499376 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem -> 23159822.pem in /etc/ssl/certs
	I1101 09:30:10.807631 2499376 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:30:10.817050 2499376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem --> /etc/ssl/certs/23159822.pem (1708 bytes)
	I1101 09:30:10.836583 2499376 start.go:296] duration metric: took 153.754139ms for postStartSetup
	I1101 09:30:10.836977 2499376 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-312549
	I1101 09:30:10.854993 2499376 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/embed-certs-312549/config.json ...
	I1101 09:30:10.855296 2499376 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:30:10.855349 2499376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-312549
	I1101 09:30:10.873762 2499376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36350 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/embed-certs-312549/id_rsa Username:docker}
	I1101 09:30:10.976709 2499376 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:30:10.981306 2499376 start.go:128] duration metric: took 11.978019797s to createHost
	I1101 09:30:10.981331 2499376 start.go:83] releasing machines lock for "embed-certs-312549", held for 11.978156457s
	I1101 09:30:10.981398 2499376 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-312549
	I1101 09:30:10.997797 2499376 ssh_runner.go:195] Run: cat /version.json
	I1101 09:30:10.997846 2499376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-312549
	I1101 09:30:10.998076 2499376 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:30:10.998131 2499376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-312549
	I1101 09:30:11.029404 2499376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36350 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/embed-certs-312549/id_rsa Username:docker}
	I1101 09:30:11.040054 2499376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36350 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/embed-certs-312549/id_rsa Username:docker}
	I1101 09:30:11.223939 2499376 ssh_runner.go:195] Run: systemctl --version
	I1101 09:30:11.230718 2499376 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:30:11.269906 2499376 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:30:11.274262 2499376 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:30:11.274337 2499376 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:30:11.302192 2499376 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 09:30:11.302217 2499376 start.go:496] detecting cgroup driver to use...
	I1101 09:30:11.302247 2499376 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 09:30:11.302298 2499376 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:30:11.320491 2499376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:30:11.334303 2499376 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:30:11.334373 2499376 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:30:11.352734 2499376 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:30:11.372299 2499376 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:30:11.510852 2499376 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:30:11.648112 2499376 docker.go:234] disabling docker service ...
	I1101 09:30:11.648230 2499376 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:30:11.669992 2499376 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:30:11.683921 2499376 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:30:11.807992 2499376 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:30:11.923574 2499376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:30:11.936479 2499376 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:30:11.951547 2499376 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:30:11.951660 2499376 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:30:11.960931 2499376 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:30:11.961046 2499376 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:30:11.969763 2499376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:30:11.978610 2499376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:30:11.988617 2499376 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:30:11.996989 2499376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:30:12.006930 2499376 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:30:12.023291 2499376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:30:12.033459 2499376 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:30:12.041849 2499376 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:30:12.049287 2499376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:30:12.164243 2499376 ssh_runner.go:195] Run: sudo systemctl restart crio
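The sequence above rewrites the CRI-O drop-in config before restarting the service. A condensed, purely illustrative restatement of those edits as one script (same files and sed expressions as in the log):

	# Point crictl at the CRI-O socket.
	printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
	# Pin the pause image and switch CRI-O to the cgroupfs cgroup manager.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	# Enable IP forwarding, then reload units and restart CRI-O.
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo systemctl daemon-reload && sudo systemctl restart crio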
	I1101 09:30:12.301530 2499376 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:30:12.301634 2499376 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:30:12.306115 2499376 start.go:564] Will wait 60s for crictl version
	I1101 09:30:12.306198 2499376 ssh_runner.go:195] Run: which crictl
	I1101 09:30:12.310306 2499376 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:30:12.335441 2499376 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:30:12.335594 2499376 ssh_runner.go:195] Run: crio --version
	I1101 09:30:12.364151 2499376 ssh_runner.go:195] Run: crio --version
	I1101 09:30:12.397091 2499376 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1101 09:30:09.764115 2495115 node_ready.go:57] node "no-preload-357229" has "Ready":"False" status (will retry)
	W1101 09:30:11.764295 2495115 node_ready.go:57] node "no-preload-357229" has "Ready":"False" status (will retry)
	I1101 09:30:12.399960 2499376 cli_runner.go:164] Run: docker network inspect embed-certs-312549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:30:12.416430 2499376 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 09:30:12.420113 2499376 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:30:12.430626 2499376 kubeadm.go:884] updating cluster {Name:embed-certs-312549 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-312549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:30:12.430753 2499376 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:30:12.430811 2499376 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:30:12.467040 2499376 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:30:12.467061 2499376 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:30:12.467116 2499376 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:30:12.494560 2499376 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:30:12.494634 2499376 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:30:12.494657 2499376 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 09:30:12.494779 2499376 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-312549 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-312549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:30:12.494902 2499376 ssh_runner.go:195] Run: crio config
	I1101 09:30:12.547645 2499376 cni.go:84] Creating CNI manager for ""
	I1101 09:30:12.547668 2499376 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:30:12.547687 2499376 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:30:12.547725 2499376 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-312549 NodeName:embed-certs-312549 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:30:12.547890 2499376 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-312549"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
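The kubeadm config printed above is written to /var/tmp/minikube/kubeadm.yaml.new and later fed to `kubeadm init`. A hedged sanity check one could run against such a file (assuming the bundled kubeadm build ships the `config validate` subcommand):

	# Validate the generated config before init (illustrative only).
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new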
	
	I1101 09:30:12.547975 2499376 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:30:12.555526 2499376 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:30:12.555616 2499376 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:30:12.563164 2499376 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1101 09:30:12.575321 2499376 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:30:12.587933 2499376 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1101 09:30:12.600679 2499376 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:30:12.605041 2499376 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:30:12.615677 2499376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:30:12.737389 2499376 ssh_runner.go:195] Run: sudo systemctl start kubelet
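After the kubelet unit file and the 10-kubeadm.conf drop-in are copied over and the daemon is reloaded, kubelet is started. An illustrative follow-up check (not part of the log) to confirm the unit picked up the minikube drop-in:

	# Show the merged unit, including /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	systemctl cat kubelet
	systemctl is-active kubelet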
	I1101 09:30:12.753729 2499376 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/embed-certs-312549 for IP: 192.168.76.2
	I1101 09:30:12.753758 2499376 certs.go:195] generating shared ca certs ...
	I1101 09:30:12.753774 2499376 certs.go:227] acquiring lock for ca certs: {Name:mk24842b93d4e231663829c7c8677798ff77a3a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:30:12.753962 2499376 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key
	I1101 09:30:12.754022 2499376 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key
	I1101 09:30:12.754048 2499376 certs.go:257] generating profile certs ...
	I1101 09:30:12.754130 2499376 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/embed-certs-312549/client.key
	I1101 09:30:12.754148 2499376 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/embed-certs-312549/client.crt with IP's: []
	W1101 09:30:13.764527 2495115 node_ready.go:57] node "no-preload-357229" has "Ready":"False" status (will retry)
	I1101 09:30:14.279747 2495115 node_ready.go:49] node "no-preload-357229" is "Ready"
	I1101 09:30:14.279773 2495115 node_ready.go:38] duration metric: took 15.018934743s for node "no-preload-357229" to be "Ready" ...
	I1101 09:30:14.279787 2495115 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:30:14.279898 2495115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:30:14.325900 2495115 api_server.go:72] duration metric: took 16.597278489s to wait for apiserver process to appear ...
	I1101 09:30:14.325972 2495115 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:30:14.326006 2495115 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:30:14.364881 2495115 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
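The healthz probe above can be reproduced by hand; `-k` skips certificate verification since the API server certificate is issued by the cluster-internal minikubeCA (illustrative only):

	curl -k https://192.168.85.2:8443/healthz
	# expected: ok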
	I1101 09:30:14.366504 2495115 api_server.go:141] control plane version: v1.34.1
	I1101 09:30:14.366530 2495115 api_server.go:131] duration metric: took 40.534622ms to wait for apiserver health ...
	I1101 09:30:14.366540 2495115 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:30:14.428748 2495115 system_pods.go:59] 8 kube-system pods found
	I1101 09:30:14.428867 2495115 system_pods.go:61] "coredns-66bc5c9577-txw5s" [5c832644-3e2e-4c30-8ca3-39f6885bcb2b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:30:14.428891 2495115 system_pods.go:61] "etcd-no-preload-357229" [a63fcd49-ae1b-43ce-b495-cc8cc64e7fa9] Running
	I1101 09:30:14.428925 2495115 system_pods.go:61] "kindnet-lxlsh" [e0c15626-2ec0-46cd-9e60-5e539d445218] Running
	I1101 09:30:14.428949 2495115 system_pods.go:61] "kube-apiserver-no-preload-357229" [2338f847-b546-4fee-8ef7-b2e93e09276e] Running
	I1101 09:30:14.428981 2495115 system_pods.go:61] "kube-controller-manager-no-preload-357229" [8aa33e99-9e5f-495f-8304-6c7db573fde0] Running
	I1101 09:30:14.429015 2495115 system_pods.go:61] "kube-proxy-2mqtw" [729122cc-da91-48af-9470-0a01890691df] Running
	I1101 09:30:14.429037 2495115 system_pods.go:61] "kube-scheduler-no-preload-357229" [9912cbd9-29f1-4ff7-bbe0-66a5e8b4b4a2] Running
	I1101 09:30:14.429057 2495115 system_pods.go:61] "storage-provisioner" [885f6151-81b6-4759-893a-a719350ab59b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:30:14.429093 2495115 system_pods.go:74] duration metric: took 62.533668ms to wait for pod list to return data ...
	I1101 09:30:14.429126 2495115 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:30:14.457673 2495115 default_sa.go:45] found service account: "default"
	I1101 09:30:14.457754 2495115 default_sa.go:55] duration metric: took 28.598674ms for default service account to be created ...
	I1101 09:30:14.457778 2495115 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:30:14.462714 2495115 system_pods.go:86] 8 kube-system pods found
	I1101 09:30:14.462798 2495115 system_pods.go:89] "coredns-66bc5c9577-txw5s" [5c832644-3e2e-4c30-8ca3-39f6885bcb2b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:30:14.462821 2495115 system_pods.go:89] "etcd-no-preload-357229" [a63fcd49-ae1b-43ce-b495-cc8cc64e7fa9] Running
	I1101 09:30:14.462843 2495115 system_pods.go:89] "kindnet-lxlsh" [e0c15626-2ec0-46cd-9e60-5e539d445218] Running
	I1101 09:30:14.462882 2495115 system_pods.go:89] "kube-apiserver-no-preload-357229" [2338f847-b546-4fee-8ef7-b2e93e09276e] Running
	I1101 09:30:14.462900 2495115 system_pods.go:89] "kube-controller-manager-no-preload-357229" [8aa33e99-9e5f-495f-8304-6c7db573fde0] Running
	I1101 09:30:14.462918 2495115 system_pods.go:89] "kube-proxy-2mqtw" [729122cc-da91-48af-9470-0a01890691df] Running
	I1101 09:30:14.462951 2495115 system_pods.go:89] "kube-scheduler-no-preload-357229" [9912cbd9-29f1-4ff7-bbe0-66a5e8b4b4a2] Running
	I1101 09:30:14.462977 2495115 system_pods.go:89] "storage-provisioner" [885f6151-81b6-4759-893a-a719350ab59b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:30:14.463033 2495115 retry.go:31] will retry after 277.633019ms: missing components: kube-dns
	I1101 09:30:14.755913 2495115 system_pods.go:86] 8 kube-system pods found
	I1101 09:30:14.755993 2495115 system_pods.go:89] "coredns-66bc5c9577-txw5s" [5c832644-3e2e-4c30-8ca3-39f6885bcb2b] Running
	I1101 09:30:14.756060 2495115 system_pods.go:89] "etcd-no-preload-357229" [a63fcd49-ae1b-43ce-b495-cc8cc64e7fa9] Running
	I1101 09:30:14.756087 2495115 system_pods.go:89] "kindnet-lxlsh" [e0c15626-2ec0-46cd-9e60-5e539d445218] Running
	I1101 09:30:14.756108 2495115 system_pods.go:89] "kube-apiserver-no-preload-357229" [2338f847-b546-4fee-8ef7-b2e93e09276e] Running
	I1101 09:30:14.756141 2495115 system_pods.go:89] "kube-controller-manager-no-preload-357229" [8aa33e99-9e5f-495f-8304-6c7db573fde0] Running
	I1101 09:30:14.756171 2495115 system_pods.go:89] "kube-proxy-2mqtw" [729122cc-da91-48af-9470-0a01890691df] Running
	I1101 09:30:14.756197 2495115 system_pods.go:89] "kube-scheduler-no-preload-357229" [9912cbd9-29f1-4ff7-bbe0-66a5e8b4b4a2] Running
	I1101 09:30:14.756239 2495115 system_pods.go:89] "storage-provisioner" [885f6151-81b6-4759-893a-a719350ab59b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:30:14.756270 2495115 system_pods.go:126] duration metric: took 298.473475ms to wait for k8s-apps to be running ...
	I1101 09:30:14.756293 2495115 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:30:14.756366 2495115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:30:14.813722 2495115 system_svc.go:56] duration metric: took 57.422506ms WaitForService to wait for kubelet
	I1101 09:30:14.813749 2495115 kubeadm.go:587] duration metric: took 17.085131781s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:30:14.813787 2495115 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:30:14.817012 2495115 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 09:30:14.817041 2495115 node_conditions.go:123] node cpu capacity is 2
	I1101 09:30:14.817055 2495115 node_conditions.go:105] duration metric: took 3.261555ms to run NodePressure ...
	I1101 09:30:14.817066 2495115 start.go:242] waiting for startup goroutines ...
	I1101 09:30:14.817074 2495115 start.go:247] waiting for cluster config update ...
	I1101 09:30:14.817089 2495115 start.go:256] writing updated cluster config ...
	I1101 09:30:14.817362 2495115 ssh_runner.go:195] Run: rm -f paused
	I1101 09:30:14.821983 2495115 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:30:14.852511 2495115 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-txw5s" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:30:14.861208 2495115 pod_ready.go:94] pod "coredns-66bc5c9577-txw5s" is "Ready"
	I1101 09:30:14.861234 2495115 pod_ready.go:86] duration metric: took 8.685569ms for pod "coredns-66bc5c9577-txw5s" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:30:14.953213 2495115 pod_ready.go:83] waiting for pod "etcd-no-preload-357229" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:30:14.958359 2495115 pod_ready.go:94] pod "etcd-no-preload-357229" is "Ready"
	I1101 09:30:14.958380 2495115 pod_ready.go:86] duration metric: took 5.144374ms for pod "etcd-no-preload-357229" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:30:14.962583 2495115 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-357229" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:30:14.968447 2495115 pod_ready.go:94] pod "kube-apiserver-no-preload-357229" is "Ready"
	I1101 09:30:14.968469 2495115 pod_ready.go:86] duration metric: took 5.864395ms for pod "kube-apiserver-no-preload-357229" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:30:14.970881 2495115 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-357229" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:30:15.226538 2495115 pod_ready.go:94] pod "kube-controller-manager-no-preload-357229" is "Ready"
	I1101 09:30:15.226562 2495115 pod_ready.go:86] duration metric: took 255.610072ms for pod "kube-controller-manager-no-preload-357229" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:30:15.426059 2495115 pod_ready.go:83] waiting for pod "kube-proxy-2mqtw" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:30:15.825911 2495115 pod_ready.go:94] pod "kube-proxy-2mqtw" is "Ready"
	I1101 09:30:15.825935 2495115 pod_ready.go:86] duration metric: took 399.853893ms for pod "kube-proxy-2mqtw" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:30:16.026631 2495115 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-357229" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:30:16.425643 2495115 pod_ready.go:94] pod "kube-scheduler-no-preload-357229" is "Ready"
	I1101 09:30:16.425667 2495115 pod_ready.go:86] duration metric: took 399.007426ms for pod "kube-scheduler-no-preload-357229" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:30:16.425680 2495115 pod_ready.go:40] duration metric: took 1.603668592s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:30:16.505658 2495115 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 09:30:16.511953 2495115 out.go:179] * Done! kubectl is now configured to use "no-preload-357229" cluster and "default" namespace by default
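Once the profile is reported ready, a minimal smoke test would query the cluster through the kubeconfig context minikube just wrote (assuming, as is minikube's default, that the context name matches the profile name):

	kubectl --context no-preload-357229 get nodes
	kubectl --context no-preload-357229 get pods -n kube-system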
	I1101 09:30:13.736614 2499376 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/embed-certs-312549/client.crt ...
	I1101 09:30:13.736650 2499376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/embed-certs-312549/client.crt: {Name:mk91fa5b5616a6a759069e1a3bcea2c06cfa56a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:30:13.736854 2499376 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/embed-certs-312549/client.key ...
	I1101 09:30:13.736870 2499376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/embed-certs-312549/client.key: {Name:mk6bd53633406f95da4baf3760364c6055926b0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:30:13.736963 2499376 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/embed-certs-312549/apiserver.key.d30b9046
	I1101 09:30:13.736986 2499376 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/embed-certs-312549/apiserver.crt.d30b9046 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1101 09:30:14.811688 2499376 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/embed-certs-312549/apiserver.crt.d30b9046 ...
	I1101 09:30:14.811721 2499376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/embed-certs-312549/apiserver.crt.d30b9046: {Name:mkd0d41fe4a386ffa63080523658df23da13efb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:30:14.811948 2499376 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/embed-certs-312549/apiserver.key.d30b9046 ...
	I1101 09:30:14.811968 2499376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/embed-certs-312549/apiserver.key.d30b9046: {Name:mk4adb80972e3d44c5e29f6d8ce7ba3e09ae6f6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:30:14.812058 2499376 certs.go:382] copying /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/embed-certs-312549/apiserver.crt.d30b9046 -> /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/embed-certs-312549/apiserver.crt
	I1101 09:30:14.812145 2499376 certs.go:386] copying /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/embed-certs-312549/apiserver.key.d30b9046 -> /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/embed-certs-312549/apiserver.key
	I1101 09:30:14.812215 2499376 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/embed-certs-312549/proxy-client.key
	I1101 09:30:14.812234 2499376 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/embed-certs-312549/proxy-client.crt with IP's: []
	I1101 09:30:15.909868 2499376 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/embed-certs-312549/proxy-client.crt ...
	I1101 09:30:15.909899 2499376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/embed-certs-312549/proxy-client.crt: {Name:mkeb85b471f65a77cdbd3c35fd08698dc2fcd4fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:30:15.910102 2499376 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/embed-certs-312549/proxy-client.key ...
	I1101 09:30:15.910118 2499376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/embed-certs-312549/proxy-client.key: {Name:mk55188fd09a122557892edeebd23556a454659e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:30:15.910318 2499376 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem (1338 bytes)
	W1101 09:30:15.910360 2499376 certs.go:480] ignoring /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982_empty.pem, impossibly tiny 0 bytes
	I1101 09:30:15.910373 2499376 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 09:30:15.910399 2499376 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:30:15.910426 2499376 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:30:15.910453 2499376 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem (1675 bytes)
	I1101 09:30:15.910500 2499376 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem (1708 bytes)
	I1101 09:30:15.911097 2499376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:30:15.930686 2499376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 09:30:15.950931 2499376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:30:15.970739 2499376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:30:15.990543 2499376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/embed-certs-312549/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1101 09:30:16.017506 2499376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/embed-certs-312549/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 09:30:16.039445 2499376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/embed-certs-312549/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:30:16.059451 2499376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/embed-certs-312549/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:30:16.077594 2499376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem --> /usr/share/ca-certificates/23159822.pem (1708 bytes)
	I1101 09:30:16.096000 2499376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:30:16.114040 2499376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem --> /usr/share/ca-certificates/2315982.pem (1338 bytes)
	I1101 09:30:16.132073 2499376 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:30:16.146069 2499376 ssh_runner.go:195] Run: openssl version
	I1101 09:30:16.152562 2499376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:30:16.161074 2499376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:30:16.164789 2499376 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:30:16.164862 2499376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:30:16.207592 2499376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:30:16.215985 2499376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2315982.pem && ln -fs /usr/share/ca-certificates/2315982.pem /etc/ssl/certs/2315982.pem"
	I1101 09:30:16.224247 2499376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2315982.pem
	I1101 09:30:16.228586 2499376 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 08:36 /usr/share/ca-certificates/2315982.pem
	I1101 09:30:16.228668 2499376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2315982.pem
	I1101 09:30:16.270288 2499376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2315982.pem /etc/ssl/certs/51391683.0"
	I1101 09:30:16.278511 2499376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23159822.pem && ln -fs /usr/share/ca-certificates/23159822.pem /etc/ssl/certs/23159822.pem"
	I1101 09:30:16.286637 2499376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23159822.pem
	I1101 09:30:16.290208 2499376 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 08:36 /usr/share/ca-certificates/23159822.pem
	I1101 09:30:16.290272 2499376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23159822.pem
	I1101 09:30:16.331287 2499376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23159822.pem /etc/ssl/certs/3ec20f2e.0"
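The three repetitions above all follow the same pattern: copy the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it into /etc/ssl/certs as <hash>.0 so the system trust store can find it. A generic sketch of that loop (illustrative, not the exact commands minikube runs):

	for pem in /usr/share/ca-certificates/*.pem; do
	  hash=$(openssl x509 -hash -noout -in "$pem")
	  sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
	done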
	I1101 09:30:16.339351 2499376 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:30:16.342755 2499376 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 09:30:16.342810 2499376 kubeadm.go:401] StartCluster: {Name:embed-certs-312549 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-312549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:30:16.342934 2499376 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:30:16.342991 2499376 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:30:16.369888 2499376 cri.go:89] found id: ""
	I1101 09:30:16.370031 2499376 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:30:16.378115 2499376 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:30:16.385985 2499376 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 09:30:16.386052 2499376 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:30:16.394228 2499376 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 09:30:16.394250 2499376 kubeadm.go:158] found existing configuration files:
	
	I1101 09:30:16.394321 2499376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 09:30:16.402305 2499376 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 09:30:16.402396 2499376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 09:30:16.410229 2499376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 09:30:16.418024 2499376 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 09:30:16.418107 2499376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:30:16.428868 2499376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 09:30:16.438590 2499376 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 09:30:16.438655 2499376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:30:16.447375 2499376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 09:30:16.456548 2499376 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 09:30:16.456629 2499376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 09:30:16.464960 2499376 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 09:30:16.524094 2499376 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 09:30:16.524215 2499376 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 09:30:16.587373 2499376 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 09:30:16.587448 2499376 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 09:30:16.587485 2499376 kubeadm.go:319] OS: Linux
	I1101 09:30:16.587533 2499376 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 09:30:16.587583 2499376 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 09:30:16.587633 2499376 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 09:30:16.587683 2499376 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 09:30:16.587734 2499376 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 09:30:16.587784 2499376 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 09:30:16.587831 2499376 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 09:30:16.588157 2499376 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 09:30:16.588212 2499376 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 09:30:16.673725 2499376 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 09:30:16.673839 2499376 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 09:30:16.673934 2499376 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 09:30:16.701412 2499376 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 09:30:16.707114 2499376 out.go:252]   - Generating certificates and keys ...
	I1101 09:30:16.707212 2499376 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 09:30:16.707281 2499376 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 09:30:17.667834 2499376 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 09:30:17.990861 2499376 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 09:30:18.637507 2499376 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 09:30:18.928134 2499376 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 09:30:19.178177 2499376 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 09:30:19.178340 2499376 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-312549 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 09:30:19.235775 2499376 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 09:30:19.237474 2499376 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-312549 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 09:30:19.892090 2499376 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 09:30:20.400744 2499376 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 09:30:20.951566 2499376 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 09:30:20.951892 2499376 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 09:30:21.572237 2499376 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 09:30:22.091154 2499376 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 09:30:23.031708 2499376 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 09:30:24.408389 2499376 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 09:30:24.478904 2499376 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 09:30:24.479834 2499376 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 09:30:24.482601 2499376 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
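	# A rough sketch (not harness output): the phases above (preflight, certs, kubeconfig, manifests)
	# can be replayed individually from inside the node, assuming the rendered config is still at
	# /var/tmp/minikube/kubeadm.yaml:
	#   sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml
	#   sudo /var/lib/minikube/binaries/v1.34.1/kubeadm certs check-expiration --config /var/tmp/minikube/kubeadm.yaml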
	
	
	==> CRI-O <==
	Nov 01 09:30:14 no-preload-357229 crio[838]: time="2025-11-01T09:30:14.826043346Z" level=info msg="Created container 932f2eae4531882aa2338d463b5fae58a58c3b91e3b3534b2e44730b781cc48e: kube-system/storage-provisioner/storage-provisioner" id=52834066-73f6-4638-8bd9-575df6c5ad7e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:30:14 no-preload-357229 crio[838]: time="2025-11-01T09:30:14.826807567Z" level=info msg="Starting container: 932f2eae4531882aa2338d463b5fae58a58c3b91e3b3534b2e44730b781cc48e" id=e465551f-cfb6-436d-bfa0-a9ac2b8dbdf0 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:30:14 no-preload-357229 crio[838]: time="2025-11-01T09:30:14.832672487Z" level=info msg="Started container" PID=2514 containerID=932f2eae4531882aa2338d463b5fae58a58c3b91e3b3534b2e44730b781cc48e description=kube-system/storage-provisioner/storage-provisioner id=e465551f-cfb6-436d-bfa0-a9ac2b8dbdf0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7aca7cd1754ccf1622885aad3d522bab660e110eb059558526db5562226838b2
	Nov 01 09:30:17 no-preload-357229 crio[838]: time="2025-11-01T09:30:17.131069433Z" level=info msg="Running pod sandbox: default/busybox/POD" id=cb21c827-7acd-4d8c-9717-80c57f7faad6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:30:17 no-preload-357229 crio[838]: time="2025-11-01T09:30:17.131139954Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:30:17 no-preload-357229 crio[838]: time="2025-11-01T09:30:17.137112547Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3e51a7c1cc9f05a1df2d86cf87e37a2360367e33895ac8adf026a403d284bf05 UID:b87372db-ac84-42f2-8d5e-f821c34ca391 NetNS:/var/run/netns/b6ec3330-e154-4041-ab1b-fad86c8651e0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400283e858}] Aliases:map[]}"
	Nov 01 09:30:17 no-preload-357229 crio[838]: time="2025-11-01T09:30:17.137286499Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 01 09:30:17 no-preload-357229 crio[838]: time="2025-11-01T09:30:17.145828925Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3e51a7c1cc9f05a1df2d86cf87e37a2360367e33895ac8adf026a403d284bf05 UID:b87372db-ac84-42f2-8d5e-f821c34ca391 NetNS:/var/run/netns/b6ec3330-e154-4041-ab1b-fad86c8651e0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400283e858}] Aliases:map[]}"
	Nov 01 09:30:17 no-preload-357229 crio[838]: time="2025-11-01T09:30:17.146122054Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 01 09:30:17 no-preload-357229 crio[838]: time="2025-11-01T09:30:17.150089369Z" level=info msg="Ran pod sandbox 3e51a7c1cc9f05a1df2d86cf87e37a2360367e33895ac8adf026a403d284bf05 with infra container: default/busybox/POD" id=cb21c827-7acd-4d8c-9717-80c57f7faad6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:30:17 no-preload-357229 crio[838]: time="2025-11-01T09:30:17.151231664Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9b9624b2-aff7-4f62-b8b5-5820169a6fb1 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:30:17 no-preload-357229 crio[838]: time="2025-11-01T09:30:17.151451489Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=9b9624b2-aff7-4f62-b8b5-5820169a6fb1 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:30:17 no-preload-357229 crio[838]: time="2025-11-01T09:30:17.151559023Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=9b9624b2-aff7-4f62-b8b5-5820169a6fb1 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:30:17 no-preload-357229 crio[838]: time="2025-11-01T09:30:17.15245974Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d162b51b-d0eb-4388-a27b-881083b57456 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:30:17 no-preload-357229 crio[838]: time="2025-11-01T09:30:17.164384152Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 01 09:30:19 no-preload-357229 crio[838]: time="2025-11-01T09:30:19.308953878Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=d162b51b-d0eb-4388-a27b-881083b57456 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:30:19 no-preload-357229 crio[838]: time="2025-11-01T09:30:19.309622422Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3393bc81-b93d-4217-903f-38523f451da3 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:30:19 no-preload-357229 crio[838]: time="2025-11-01T09:30:19.313709061Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=27d019bc-cb1a-4116-8b5f-95a1a934ed1c name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:30:19 no-preload-357229 crio[838]: time="2025-11-01T09:30:19.320816056Z" level=info msg="Creating container: default/busybox/busybox" id=e4ce9eef-774b-4e67-bc56-ad37b90c3ebe name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:30:19 no-preload-357229 crio[838]: time="2025-11-01T09:30:19.32092144Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:30:19 no-preload-357229 crio[838]: time="2025-11-01T09:30:19.328757038Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:30:19 no-preload-357229 crio[838]: time="2025-11-01T09:30:19.329270476Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:30:19 no-preload-357229 crio[838]: time="2025-11-01T09:30:19.347388874Z" level=info msg="Created container b4af56bdb161b6da938b66e394dccd06ba5d008758ce09bb23df77f3330e1426: default/busybox/busybox" id=e4ce9eef-774b-4e67-bc56-ad37b90c3ebe name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:30:19 no-preload-357229 crio[838]: time="2025-11-01T09:30:19.350702563Z" level=info msg="Starting container: b4af56bdb161b6da938b66e394dccd06ba5d008758ce09bb23df77f3330e1426" id=532a988a-5ddb-4919-834d-e2acf68c5cad name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:30:19 no-preload-357229 crio[838]: time="2025-11-01T09:30:19.354570616Z" level=info msg="Started container" PID=2559 containerID=b4af56bdb161b6da938b66e394dccd06ba5d008758ce09bb23df77f3330e1426 description=default/busybox/busybox id=532a988a-5ddb-4919-834d-e2acf68c5cad name=/runtime.v1.RuntimeService/StartContainer sandboxID=3e51a7c1cc9f05a1df2d86cf87e37a2360367e33895ac8adf026a403d284bf05
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b4af56bdb161b       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   3e51a7c1cc9f0       busybox                                     default
	932f2eae45318       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      12 seconds ago      Running             storage-provisioner       0                   7aca7cd1754cc       storage-provisioner                         kube-system
	a804a7f24e02b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago      Running             coredns                   0                   4140a281c9dce       coredns-66bc5c9577-txw5s                    kube-system
	37a199f3dae57       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    23 seconds ago      Running             kindnet-cni               0                   f108d59b1e067       kindnet-lxlsh                               kube-system
	e88be2c3dceb8       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      28 seconds ago      Running             kube-proxy                0                   0199976246d70       kube-proxy-2mqtw                            kube-system
	f8557fb994be4       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      44 seconds ago      Running             kube-controller-manager   0                   66ac4308eba8d       kube-controller-manager-no-preload-357229   kube-system
	5973b336805b1       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      44 seconds ago      Running             etcd                      0                   c1cba1d4d1b87       etcd-no-preload-357229                      kube-system
	e65d000223cf5       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      44 seconds ago      Running             kube-apiserver            0                   775da8cb42bab       kube-apiserver-no-preload-357229            kube-system
	1ea13b0f7762d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      44 seconds ago      Running             kube-scheduler            0                   593b35b2d3d9b       kube-scheduler-no-preload-357229            kube-system
	
	
	==> coredns [a804a7f24e02b387c6c8281a9effe487a2466fc6e6d5ded8d37118ebbb4b6df8] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60011 - 63985 "HINFO IN 3024593950968764488.2917337092012312153. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.004575619s
	
	
	==> describe nodes <==
	Name:               no-preload-357229
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-357229
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=no-preload-357229
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_29_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:29:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-357229
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:30:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:30:23 +0000   Sat, 01 Nov 2025 09:29:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:30:23 +0000   Sat, 01 Nov 2025 09:29:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:30:23 +0000   Sat, 01 Nov 2025 09:29:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:30:23 +0000   Sat, 01 Nov 2025 09:30:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-357229
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                7d552ba1-c6de-4d90-ae3f-74806a4aebb4
	  Boot ID:                    eebecd53-57fd-46e5-aa39-103fca906436
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-txw5s                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     30s
	  kube-system                 etcd-no-preload-357229                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         36s
	  kube-system                 kindnet-lxlsh                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-no-preload-357229             250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-no-preload-357229    200m (10%)    0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-2mqtw                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-no-preload-357229             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 28s                kube-proxy       
	  Normal   Starting                 46s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 46s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  46s (x8 over 46s)  kubelet          Node no-preload-357229 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    46s (x8 over 46s)  kubelet          Node no-preload-357229 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     46s (x8 over 46s)  kubelet          Node no-preload-357229 status is now: NodeHasSufficientPID
	  Normal   Starting                 35s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 35s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  35s                kubelet          Node no-preload-357229 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    35s                kubelet          Node no-preload-357229 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     35s                kubelet          Node no-preload-357229 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           31s                node-controller  Node no-preload-357229 event: Registered Node no-preload-357229 in Controller
	  Normal   NodeReady                13s                kubelet          Node no-preload-357229 status is now: NodeReady
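	# The node view above can be reproduced directly; the "Allocated resources" percentages are pod
	# requests divided by allocatable (e.g. 850m of 2000m allocatable CPU ≈ 42%). A sketch:
	#   kubectl --context no-preload-357229 describe node no-preload-357229
	#   kubectl --context no-preload-357229 get node no-preload-357229 -o jsonpath='{.status.allocatable.cpu}'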
	
	
	==> dmesg <==
	[ +35.036001] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:10] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:11] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:12] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:13] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:14] overlayfs: idmapped layers are currently not supported
	[  +7.992192] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:15] overlayfs: idmapped layers are currently not supported
	[ +24.457663] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:16] overlayfs: idmapped layers are currently not supported
	[ +26.408819] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:18] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:22] overlayfs: idmapped layers are currently not supported
	[ +31.970573] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:24] overlayfs: idmapped layers are currently not supported
	[ +34.721891] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:25] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:26] overlayfs: idmapped layers are currently not supported
	[  +0.217637] overlayfs: idmapped layers are currently not supported
	[ +42.063471] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:29] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:30] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5973b336805b1a0534f6104945092da63a7235aa5c14ef6eb69c1e4b929f4088] <==
	{"level":"warn","ts":"2025-11-01T09:29:44.450025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:44.476402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:44.497889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:44.560613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:44.569304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:44.631831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:44.635240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:44.667089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:44.683627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:44.719968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:44.752279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:44.782139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:44.806651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:44.836013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:44.860648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:44.882140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:44.919122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:44.940603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:44.968442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:44.988854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:45.017362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:45.070777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:45.100248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:45.120078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:45.338590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35176","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:30:27 up 18:12,  0 user,  load average: 3.97, 3.73, 3.04
	Linux no-preload-357229 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [37a199f3dae5719c5c5ae4d14caeafc8dea0c0157b92b8487f7b0286181499a1] <==
	I1101 09:30:03.752812       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:30:03.753311       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 09:30:03.753472       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:30:03.753512       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:30:03.753562       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:30:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:30:03.954487       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:30:03.954562       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:30:03.954601       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:30:03.956924       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:30:04.154882       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:30:04.154909       1 metrics.go:72] Registering metrics
	I1101 09:30:04.154957       1 controller.go:711] "Syncing nftables rules"
	I1101 09:30:13.959928       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 09:30:13.959988       1 main.go:301] handling current node
	I1101 09:30:23.953702       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 09:30:23.953822       1 main.go:301] handling current node
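	# The "Syncing nftables rules" lines come from kindnet's kube-network-policies controller, which
	# programs policy rules via nftables; a sketch for inspecting them from inside the node (exact
	# table names vary by kindnet version):
	#   sudo nft list tables
	#   sudo nft list ruleset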
	
	
	==> kube-apiserver [e65d000223cf5f27cf0fa2236a3084be32ad0de7c774feb507ad01c65468fdc7] <==
	I1101 09:29:47.756030       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 09:29:47.776469       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:29:47.776608       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	E1101 09:29:47.820338       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1101 09:29:47.839510       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:29:47.850306       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:29:48.046177       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:29:48.076411       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 09:29:48.107576       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 09:29:48.107608       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:29:50.065799       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:29:50.189474       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:29:50.437648       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 09:29:50.456450       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1101 09:29:50.457806       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:29:50.466986       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:29:51.252171       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:29:52.107633       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:29:52.140942       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 09:29:52.161197       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 09:29:57.066633       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:29:57.071364       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:29:57.263626       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:29:57.317574       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1101 09:30:24.988330       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:45572: use of closed network connection
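	# The ClusterIP allocator messages above correspond to the default Service CIDR 10.96.0.0/12 and
	# the kubernetes Service at 10.96.0.1; a sketch for confirming both (the servicecidrs resource is
	# served by v1.34 API servers):
	#   kubectl --context no-preload-357229 get svc kubernetes -o wide
	#   kubectl --context no-preload-357229 get servicecidrs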
	
	
	==> kube-controller-manager [f8557fb994be4d56dcd41dc1394f4b484eba8061c9b59d53c8dffbd3ee923604] <==
	I1101 09:29:56.372577       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 09:29:56.373814       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 09:29:56.373912       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 09:29:56.373962       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 09:29:56.374001       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 09:29:56.378970       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 09:29:56.379175       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:29:56.379193       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:29:56.380217       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:29:56.380279       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 09:29:56.380289       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 09:29:56.393991       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-357229" podCIDRs=["10.244.0.0/24"]
	I1101 09:29:56.410062       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 09:29:56.410103       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:29:56.410201       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:29:56.410217       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:29:56.410573       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 09:29:56.412017       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 09:29:56.429810       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:29:56.429855       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 09:29:56.439716       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:29:56.443017       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:29:56.448224       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 09:29:56.455905       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:30:16.319322       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [e88be2c3dceb87dd506922c29e7ccec0d262e5b64835fc4b8fcf769dc0e5efaa] <==
	I1101 09:29:58.268236       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:29:58.438033       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:29:58.544037       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:29:58.544099       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 09:29:58.544197       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:29:58.680189       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:29:58.680246       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:29:58.769311       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:29:58.769614       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:29:58.769626       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:29:58.778527       1 config.go:200] "Starting service config controller"
	I1101 09:29:58.778542       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:29:58.778557       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:29:58.778560       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:29:58.778574       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:29:58.778578       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:29:58.779169       1 config.go:309] "Starting node config controller"
	I1101 09:29:58.779177       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:29:58.779182       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:29:58.879133       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:29:58.879168       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:29:58.879182       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
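	# The "nodePortAddresses is unset" warning points at the KubeProxyConfiguration that kubeadm stores
	# in the kube-system/kube-proxy ConfigMap; a sketch for checking it (setting the field to
	# ["primary"], as the warning suggests, restricts NodePorts to the node's primary IP):
	#   kubectl --context no-preload-357229 -n kube-system get configmap kube-proxy -o yaml | grep -n -A1 nodePortAddresses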
	
	
	==> kube-scheduler [1ea13b0f7762d35611fbbf5bbd21e0d21a29ceb469cd2f69f4701c7669f93726] <==
	I1101 09:29:48.316434       1 serving.go:386] Generated self-signed cert in-memory
	I1101 09:29:51.192082       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:29:51.192183       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:29:51.204339       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:29:51.204430       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 09:29:51.204459       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 09:29:51.204486       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:29:51.218888       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:29:51.218917       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:29:51.219022       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:29:51.219037       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:29:51.304683       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 09:29:51.319527       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:29:51.319672       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:29:56 no-preload-357229 kubelet[2016]: I1101 09:29:56.421878    2016 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 09:29:57 no-preload-357229 kubelet[2016]: I1101 09:29:57.453402    2016 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0c15626-2ec0-46cd-9e60-5e539d445218-lib-modules\") pod \"kindnet-lxlsh\" (UID: \"e0c15626-2ec0-46cd-9e60-5e539d445218\") " pod="kube-system/kindnet-lxlsh"
	Nov 01 09:29:57 no-preload-357229 kubelet[2016]: I1101 09:29:57.453451    2016 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnxk8\" (UniqueName: \"kubernetes.io/projected/e0c15626-2ec0-46cd-9e60-5e539d445218-kube-api-access-rnxk8\") pod \"kindnet-lxlsh\" (UID: \"e0c15626-2ec0-46cd-9e60-5e539d445218\") " pod="kube-system/kindnet-lxlsh"
	Nov 01 09:29:57 no-preload-357229 kubelet[2016]: I1101 09:29:57.453478    2016 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmcf9\" (UniqueName: \"kubernetes.io/projected/729122cc-da91-48af-9470-0a01890691df-kube-api-access-mmcf9\") pod \"kube-proxy-2mqtw\" (UID: \"729122cc-da91-48af-9470-0a01890691df\") " pod="kube-system/kube-proxy-2mqtw"
	Nov 01 09:29:57 no-preload-357229 kubelet[2016]: I1101 09:29:57.453496    2016 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/729122cc-da91-48af-9470-0a01890691df-kube-proxy\") pod \"kube-proxy-2mqtw\" (UID: \"729122cc-da91-48af-9470-0a01890691df\") " pod="kube-system/kube-proxy-2mqtw"
	Nov 01 09:29:57 no-preload-357229 kubelet[2016]: I1101 09:29:57.453517    2016 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e0c15626-2ec0-46cd-9e60-5e539d445218-cni-cfg\") pod \"kindnet-lxlsh\" (UID: \"e0c15626-2ec0-46cd-9e60-5e539d445218\") " pod="kube-system/kindnet-lxlsh"
	Nov 01 09:29:57 no-preload-357229 kubelet[2016]: I1101 09:29:57.453534    2016 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/729122cc-da91-48af-9470-0a01890691df-xtables-lock\") pod \"kube-proxy-2mqtw\" (UID: \"729122cc-da91-48af-9470-0a01890691df\") " pod="kube-system/kube-proxy-2mqtw"
	Nov 01 09:29:57 no-preload-357229 kubelet[2016]: I1101 09:29:57.453550    2016 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/729122cc-da91-48af-9470-0a01890691df-lib-modules\") pod \"kube-proxy-2mqtw\" (UID: \"729122cc-da91-48af-9470-0a01890691df\") " pod="kube-system/kube-proxy-2mqtw"
	Nov 01 09:29:57 no-preload-357229 kubelet[2016]: I1101 09:29:57.453570    2016 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e0c15626-2ec0-46cd-9e60-5e539d445218-xtables-lock\") pod \"kindnet-lxlsh\" (UID: \"e0c15626-2ec0-46cd-9e60-5e539d445218\") " pod="kube-system/kindnet-lxlsh"
	Nov 01 09:29:57 no-preload-357229 kubelet[2016]: I1101 09:29:57.565612    2016 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 01 09:29:57 no-preload-357229 kubelet[2016]: W1101 09:29:57.705759    2016 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6863b4e551e28c0cdf28394d17eb7fbc923c22a70b5222d3093502d55d1412b9/crio-f108d59b1e06750ab8aa142d1d19316cbe2d7947c20dd72a3ffdfc6a2c8925ad WatchSource:0}: Error finding container f108d59b1e06750ab8aa142d1d19316cbe2d7947c20dd72a3ffdfc6a2c8925ad: Status 404 returned error can't find the container with id f108d59b1e06750ab8aa142d1d19316cbe2d7947c20dd72a3ffdfc6a2c8925ad
	Nov 01 09:29:57 no-preload-357229 kubelet[2016]: W1101 09:29:57.708863    2016 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6863b4e551e28c0cdf28394d17eb7fbc923c22a70b5222d3093502d55d1412b9/crio-0199976246d70317761fa7a55f5e4c8e32895b5d547aa18efd8e185918e1cae9 WatchSource:0}: Error finding container 0199976246d70317761fa7a55f5e4c8e32895b5d547aa18efd8e185918e1cae9: Status 404 returned error can't find the container with id 0199976246d70317761fa7a55f5e4c8e32895b5d547aa18efd8e185918e1cae9
	Nov 01 09:30:01 no-preload-357229 kubelet[2016]: I1101 09:30:01.081722    2016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2mqtw" podStartSLOduration=4.081703848 podStartE2EDuration="4.081703848s" podCreationTimestamp="2025-11-01 09:29:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:29:58.699294553 +0000 UTC m=+6.648828827" watchObservedRunningTime="2025-11-01 09:30:01.081703848 +0000 UTC m=+9.031238114"
	Nov 01 09:30:14 no-preload-357229 kubelet[2016]: I1101 09:30:14.095198    2016 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 01 09:30:14 no-preload-357229 kubelet[2016]: I1101 09:30:14.141297    2016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-lxlsh" podStartSLOduration=11.303367621 podStartE2EDuration="17.141275126s" podCreationTimestamp="2025-11-01 09:29:57 +0000 UTC" firstStartedPulling="2025-11-01 09:29:57.716515064 +0000 UTC m=+5.666049330" lastFinishedPulling="2025-11-01 09:30:03.554422569 +0000 UTC m=+11.503956835" observedRunningTime="2025-11-01 09:30:03.688687567 +0000 UTC m=+11.638221850" watchObservedRunningTime="2025-11-01 09:30:14.141275126 +0000 UTC m=+22.090809400"
	Nov 01 09:30:14 no-preload-357229 kubelet[2016]: I1101 09:30:14.198682    2016 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c832644-3e2e-4c30-8ca3-39f6885bcb2b-config-volume\") pod \"coredns-66bc5c9577-txw5s\" (UID: \"5c832644-3e2e-4c30-8ca3-39f6885bcb2b\") " pod="kube-system/coredns-66bc5c9577-txw5s"
	Nov 01 09:30:14 no-preload-357229 kubelet[2016]: I1101 09:30:14.198760    2016 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/885f6151-81b6-4759-893a-a719350ab59b-tmp\") pod \"storage-provisioner\" (UID: \"885f6151-81b6-4759-893a-a719350ab59b\") " pod="kube-system/storage-provisioner"
	Nov 01 09:30:14 no-preload-357229 kubelet[2016]: I1101 09:30:14.198787    2016 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f86k\" (UniqueName: \"kubernetes.io/projected/5c832644-3e2e-4c30-8ca3-39f6885bcb2b-kube-api-access-5f86k\") pod \"coredns-66bc5c9577-txw5s\" (UID: \"5c832644-3e2e-4c30-8ca3-39f6885bcb2b\") " pod="kube-system/coredns-66bc5c9577-txw5s"
	Nov 01 09:30:14 no-preload-357229 kubelet[2016]: I1101 09:30:14.198822    2016 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8qms\" (UniqueName: \"kubernetes.io/projected/885f6151-81b6-4759-893a-a719350ab59b-kube-api-access-d8qms\") pod \"storage-provisioner\" (UID: \"885f6151-81b6-4759-893a-a719350ab59b\") " pod="kube-system/storage-provisioner"
	Nov 01 09:30:14 no-preload-357229 kubelet[2016]: W1101 09:30:14.494621    2016 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6863b4e551e28c0cdf28394d17eb7fbc923c22a70b5222d3093502d55d1412b9/crio-4140a281c9dcead6e53fdbf96d8f4dc34e59310f9ab126d0a9472cf4497e31ca WatchSource:0}: Error finding container 4140a281c9dcead6e53fdbf96d8f4dc34e59310f9ab126d0a9472cf4497e31ca: Status 404 returned error can't find the container with id 4140a281c9dcead6e53fdbf96d8f4dc34e59310f9ab126d0a9472cf4497e31ca
	Nov 01 09:30:14 no-preload-357229 kubelet[2016]: I1101 09:30:14.718037    2016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-txw5s" podStartSLOduration=17.718019009 podStartE2EDuration="17.718019009s" podCreationTimestamp="2025-11-01 09:29:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:30:14.693346971 +0000 UTC m=+22.642881237" watchObservedRunningTime="2025-11-01 09:30:14.718019009 +0000 UTC m=+22.667553275"
	Nov 01 09:30:14 no-preload-357229 kubelet[2016]: W1101 09:30:14.757477    2016 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6863b4e551e28c0cdf28394d17eb7fbc923c22a70b5222d3093502d55d1412b9/crio-7aca7cd1754ccf1622885aad3d522bab660e110eb059558526db5562226838b2 WatchSource:0}: Error finding container 7aca7cd1754ccf1622885aad3d522bab660e110eb059558526db5562226838b2: Status 404 returned error can't find the container with id 7aca7cd1754ccf1622885aad3d522bab660e110eb059558526db5562226838b2
	Nov 01 09:30:16 no-preload-357229 kubelet[2016]: I1101 09:30:16.819448    2016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.819427511 podStartE2EDuration="16.819427511s" podCreationTimestamp="2025-11-01 09:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:30:15.695031165 +0000 UTC m=+23.644565447" watchObservedRunningTime="2025-11-01 09:30:16.819427511 +0000 UTC m=+24.768961785"
	Nov 01 09:30:16 no-preload-357229 kubelet[2016]: I1101 09:30:16.941755    2016 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqdpg\" (UniqueName: \"kubernetes.io/projected/b87372db-ac84-42f2-8d5e-f821c34ca391-kube-api-access-hqdpg\") pod \"busybox\" (UID: \"b87372db-ac84-42f2-8d5e-f821c34ca391\") " pod="default/busybox"
	Nov 01 09:30:17 no-preload-357229 kubelet[2016]: W1101 09:30:17.148304    2016 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6863b4e551e28c0cdf28394d17eb7fbc923c22a70b5222d3093502d55d1412b9/crio-3e51a7c1cc9f05a1df2d86cf87e37a2360367e33895ac8adf026a403d284bf05 WatchSource:0}: Error finding container 3e51a7c1cc9f05a1df2d86cf87e37a2360367e33895ac8adf026a403d284bf05: Status 404 returned error can't find the container with id 3e51a7c1cc9f05a1df2d86cf87e37a2360367e33895ac8adf026a403d284bf05
	
	
	==> storage-provisioner [932f2eae4531882aa2338d463b5fae58a58c3b91e3b3534b2e44730b781cc48e] <==
	I1101 09:30:14.842663       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 09:30:14.865707       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 09:30:14.868065       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 09:30:14.870934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:30:14.879970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:30:14.880222       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 09:30:14.881536       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-357229_d58c1b14-f07a-4925-9e95-208079f18669!
	I1101 09:30:14.885060       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d74d4812-9df3-4278-ac4b-f8343a00c004", APIVersion:"v1", ResourceVersion:"468", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-357229_d58c1b14-f07a-4925-9e95-208079f18669 became leader
	W1101 09:30:14.889761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:30:14.900549       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:30:14.982397       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-357229_d58c1b14-f07a-4925-9e95-208079f18669!
	W1101 09:30:16.904034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:30:16.909419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:30:18.912914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:30:18.923329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:30:20.927412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:30:20.935898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:30:22.939259       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:30:22.944933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:30:24.956647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:30:24.972674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:30:26.976878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:30:26.984663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
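	# The repeated deprecation warnings come from the provisioner's leader election, which still renews
	# an Endpoints lock (the object named in the LeaderElection event above); a sketch for inspecting it:
	#   kubectl --context no-preload-357229 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml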
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-357229 -n no-preload-357229
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-357229 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.33s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.89s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-312549 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-312549 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (301.435129ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:33Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-312549 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-312549 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-312549 describe deploy/metrics-server -n kube-system: exit status 1 (119.367316ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-312549 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
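The MK_ADDON_ENABLE_PAUSED error above comes from minikube's paused-cluster check, which shells out to runc before enabling the addon; a rough sketch for re-running that check by hand from a shell on the node (minikube ssh -p embed-certs-312549), keeping in mind that CRI-O may use a runc root other than runc's default /run/runc:
	sudo runc list -f json      # the command the check runs; it fails here because /run/runc does not exist
	sudo crictl ps -a           # cross-check container state through the CRI instead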
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-312549
helpers_test.go:243: (dbg) docker inspect embed-certs-312549:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "46c884efd26a2388e1c8d6b8b4b264552137880202618095e6b019b947feb1a6",
	        "Created": "2025-11-01T09:30:05.467452429Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2499992,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:30:05.535608732Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/46c884efd26a2388e1c8d6b8b4b264552137880202618095e6b019b947feb1a6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/46c884efd26a2388e1c8d6b8b4b264552137880202618095e6b019b947feb1a6/hostname",
	        "HostsPath": "/var/lib/docker/containers/46c884efd26a2388e1c8d6b8b4b264552137880202618095e6b019b947feb1a6/hosts",
	        "LogPath": "/var/lib/docker/containers/46c884efd26a2388e1c8d6b8b4b264552137880202618095e6b019b947feb1a6/46c884efd26a2388e1c8d6b8b4b264552137880202618095e6b019b947feb1a6-json.log",
	        "Name": "/embed-certs-312549",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-312549:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-312549",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "46c884efd26a2388e1c8d6b8b4b264552137880202618095e6b019b947feb1a6",
	                "LowerDir": "/var/lib/docker/overlay2/e51930860c4af8d563e9604029040cab5d84be7600dfc7a374b99215830131ec-init/diff:/var/lib/docker/overlay2/e248e2c4c8c52e2b41c7098e27a1e6d3433c7b0d01c47093073da500268c4b77/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e51930860c4af8d563e9604029040cab5d84be7600dfc7a374b99215830131ec/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e51930860c4af8d563e9604029040cab5d84be7600dfc7a374b99215830131ec/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e51930860c4af8d563e9604029040cab5d84be7600dfc7a374b99215830131ec/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-312549",
	                "Source": "/var/lib/docker/volumes/embed-certs-312549/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-312549",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-312549",
	                "name.minikube.sigs.k8s.io": "embed-certs-312549",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9a300227616349a2949343434170877de0b20270c9f3e9c497e39cd9de9e28a3",
	            "SandboxKey": "/var/run/docker/netns/9a3002276163",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36350"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36351"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36354"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36352"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36353"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-312549": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6a:98:4e:7a:2f:dc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e3dabe0b25d9c671a5a74ecef725675d174c55efcf863b93a552f738453017d3",
	                    "EndpointID": "00a0b4a8f3a69ce7e5715af1f5efc9fcdca6699c6364addea5a7d564caac67eb",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-312549",
	                        "46c884efd26a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
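The inspect dump above also shows where the node's guest ports landed on the host (SSH on 22/tcp is published at 127.0.0.1:36350). A minimal sketch that reads that mapping back with the same Go template minikube's cli_runner uses later in this log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same template as the cli_runner lines below; container name from the post-mortem above.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "embed-certs-312549").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// Prints 36350 for the container captured above.
		fmt.Println("ssh published on 127.0.0.1:" + strings.TrimSpace(string(out)))
	}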
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-312549 -n embed-certs-312549
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-312549 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-312549 logs -n 25: (1.306634648s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ delete  │ -p force-systemd-env-778652                                                                                                                                                                                                                   │ force-systemd-env-778652 │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │ 01 Nov 25 09:25 UTC │
	│ delete  │ -p pause-951206                                                                                                                                                                                                                               │ pause-951206             │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │ 01 Nov 25 09:25 UTC │
	│ start   │ -p cert-expiration-218273 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-218273   │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │ 01 Nov 25 09:26 UTC │
	│ start   │ -p cert-options-578478 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-578478      │ jenkins │ v1.37.0 │ 01 Nov 25 09:25 UTC │ 01 Nov 25 09:26 UTC │
	│ ssh     │ cert-options-578478 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-578478      │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ 01 Nov 25 09:26 UTC │
	│ ssh     │ -p cert-options-578478 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-578478      │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ 01 Nov 25 09:26 UTC │
	│ delete  │ -p cert-options-578478                                                                                                                                                                                                                        │ cert-options-578478      │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ 01 Nov 25 09:26 UTC │
	│ start   │ -p old-k8s-version-068218 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-068218   │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ 01 Nov 25 09:27 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-068218 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-068218   │ jenkins │ v1.37.0 │ 01 Nov 25 09:27 UTC │                     │
	│ stop    │ -p old-k8s-version-068218 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-068218   │ jenkins │ v1.37.0 │ 01 Nov 25 09:27 UTC │ 01 Nov 25 09:27 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-068218 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-068218   │ jenkins │ v1.37.0 │ 01 Nov 25 09:27 UTC │ 01 Nov 25 09:27 UTC │
	│ start   │ -p old-k8s-version-068218 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-068218   │ jenkins │ v1.37.0 │ 01 Nov 25 09:27 UTC │ 01 Nov 25 09:28 UTC │
	│ image   │ old-k8s-version-068218 image list --format=json                                                                                                                                                                                               │ old-k8s-version-068218   │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │ 01 Nov 25 09:28 UTC │
	│ pause   │ -p old-k8s-version-068218 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-068218   │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │                     │
	│ delete  │ -p old-k8s-version-068218                                                                                                                                                                                                                     │ old-k8s-version-068218   │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ delete  │ -p old-k8s-version-068218                                                                                                                                                                                                                     │ old-k8s-version-068218   │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ start   │ -p no-preload-357229 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-357229        │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:30 UTC │
	│ start   │ -p cert-expiration-218273 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-218273   │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ delete  │ -p cert-expiration-218273                                                                                                                                                                                                                     │ cert-expiration-218273   │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ start   │ -p embed-certs-312549 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-312549       │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:31 UTC │
	│ addons  │ enable metrics-server -p no-preload-357229 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-357229        │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │                     │
	│ stop    │ -p no-preload-357229 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-357229        │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
	│ addons  │ enable dashboard -p no-preload-357229 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-357229        │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
	│ start   │ -p no-preload-357229 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-357229        │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-312549 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-312549       │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:30:40
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:30:40.937577 2502925 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:30:40.937785 2502925 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:30:40.937813 2502925 out.go:374] Setting ErrFile to fd 2...
	I1101 09:30:40.937832 2502925 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:30:40.938124 2502925 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 09:30:40.938553 2502925 out.go:368] Setting JSON to false
	I1101 09:30:40.939550 2502925 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":65587,"bootTime":1761923854,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 09:30:40.939636 2502925 start.go:143] virtualization:  
	I1101 09:30:40.942629 2502925 out.go:179] * [no-preload-357229] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 09:30:40.946402 2502925 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:30:40.946462 2502925 notify.go:221] Checking for updates...
	I1101 09:30:40.952289 2502925 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:30:40.955238 2502925 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:30:40.958184 2502925 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2314135/.minikube
	I1101 09:30:40.961037 2502925 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 09:30:40.964026 2502925 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:30:40.967505 2502925 config.go:182] Loaded profile config "no-preload-357229": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:30:40.968111 2502925 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:30:40.991050 2502925 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:30:40.991148 2502925 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:30:41.051615 2502925 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 09:30:41.040681167 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:30:41.051727 2502925 docker.go:319] overlay module found
	I1101 09:30:41.055069 2502925 out.go:179] * Using the docker driver based on existing profile
	I1101 09:30:41.057875 2502925 start.go:309] selected driver: docker
	I1101 09:30:41.057893 2502925 start.go:930] validating driver "docker" against &{Name:no-preload-357229 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-357229 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:30:41.057991 2502925 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:30:41.058685 2502925 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:30:41.112551 2502925 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 09:30:41.103907566 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:30:41.112901 2502925 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:30:41.112935 2502925 cni.go:84] Creating CNI manager for ""
	I1101 09:30:41.112995 2502925 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:30:41.113036 2502925 start.go:353] cluster config:
	{Name:no-preload-357229 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-357229 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:30:41.116275 2502925 out.go:179] * Starting "no-preload-357229" primary control-plane node in "no-preload-357229" cluster
	I1101 09:30:41.119216 2502925 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:30:41.122118 2502925 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:30:41.124957 2502925 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:30:41.125042 2502925 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:30:41.125112 2502925 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/no-preload-357229/config.json ...
	I1101 09:30:41.125382 2502925 cache.go:107] acquiring lock: {Name:mk3ad993a5bcb6fcdbac5333cd4ee4cbd712d826 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:30:41.125478 2502925 cache.go:115] /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1101 09:30:41.125496 2502925 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21835-2314135/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 125.986µs
	I1101 09:30:41.125506 2502925 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1101 09:30:41.125523 2502925 cache.go:107] acquiring lock: {Name:mk519c32bf52e73528865af747bb32cc57d0408d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:30:41.125559 2502925 cache.go:115] /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1101 09:30:41.125569 2502925 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21835-2314135/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 47.522µs
	I1101 09:30:41.125578 2502925 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1101 09:30:41.125588 2502925 cache.go:107] acquiring lock: {Name:mk5eac40fc8553d7f6860b960980166ab8153e65 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:30:41.125632 2502925 cache.go:115] /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1101 09:30:41.125641 2502925 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21835-2314135/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 54.833µs
	I1101 09:30:41.125648 2502925 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1101 09:30:41.125656 2502925 cache.go:107] acquiring lock: {Name:mkffeb6435cef81f3622c50af353d15d26ad0ea0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:30:41.125688 2502925 cache.go:115] /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1101 09:30:41.125697 2502925 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21835-2314135/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 41.541µs
	I1101 09:30:41.125703 2502925 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1101 09:30:41.125714 2502925 cache.go:107] acquiring lock: {Name:mk766c6cd632a8f47b2853a5da13a7fbd351d474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:30:41.125743 2502925 cache.go:115] /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1101 09:30:41.125752 2502925 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21835-2314135/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 40.409µs
	I1101 09:30:41.125757 2502925 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1101 09:30:41.125766 2502925 cache.go:107] acquiring lock: {Name:mkac4a3816cc4616e08989beca9745b4c30eeed2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:30:41.125796 2502925 cache.go:115] /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1101 09:30:41.125804 2502925 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21835-2314135/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 39.072µs
	I1101 09:30:41.125821 2502925 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1101 09:30:41.125830 2502925 cache.go:107] acquiring lock: {Name:mkfe0793eba6cb27e149c33f4de7d1013be1b41d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:30:41.125859 2502925 cache.go:115] /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1101 09:30:41.125868 2502925 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21835-2314135/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 39.13µs
	I1101 09:30:41.125874 2502925 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1101 09:30:41.125885 2502925 cache.go:107] acquiring lock: {Name:mkec1c7b45bc3765a51da51eabf48a16fbf02fc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:30:41.125915 2502925 cache.go:115] /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1101 09:30:41.125924 2502925 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21835-2314135/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 39.474µs
	I1101 09:30:41.125930 2502925 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1101 09:30:41.125936 2502925 cache.go:87] Successfully saved all images to host disk.
	I1101 09:30:41.144022 2502925 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:30:41.144045 2502925 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:30:41.144063 2502925 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:30:41.144094 2502925 start.go:360] acquireMachinesLock for no-preload-357229: {Name:mkd99047151835979e6d5408ce868fcfc70837af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:30:41.144159 2502925 start.go:364] duration metric: took 45.496µs to acquireMachinesLock for "no-preload-357229"
	I1101 09:30:41.144183 2502925 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:30:41.144191 2502925 fix.go:54] fixHost starting: 
	I1101 09:30:41.144455 2502925 cli_runner.go:164] Run: docker container inspect no-preload-357229 --format={{.State.Status}}
	I1101 09:30:41.163634 2502925 fix.go:112] recreateIfNeeded on no-preload-357229: state=Stopped err=<nil>
	W1101 09:30:41.163660 2502925 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:30:40.146240 2499376 addons.go:515] duration metric: took 992.673857ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 09:30:40.554084 2499376 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-312549" context rescaled to 1 replicas
	W1101 09:30:42.050146 2499376 node_ready.go:57] node "embed-certs-312549" has "Ready":"False" status (will retry)
	I1101 09:30:41.166913 2502925 out.go:252] * Restarting existing docker container for "no-preload-357229" ...
	I1101 09:30:41.167024 2502925 cli_runner.go:164] Run: docker start no-preload-357229
	I1101 09:30:41.444733 2502925 cli_runner.go:164] Run: docker container inspect no-preload-357229 --format={{.State.Status}}
	I1101 09:30:41.476279 2502925 kic.go:430] container "no-preload-357229" state is running.
	I1101 09:30:41.476704 2502925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-357229
	I1101 09:30:41.500459 2502925 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/no-preload-357229/config.json ...
	I1101 09:30:41.500708 2502925 machine.go:94] provisionDockerMachine start ...
	I1101 09:30:41.500768 2502925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-357229
	I1101 09:30:41.530795 2502925 main.go:143] libmachine: Using SSH client type: native
	I1101 09:30:41.531122 2502925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36355 <nil> <nil>}
	I1101 09:30:41.531138 2502925 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:30:41.533281 2502925 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49314->127.0.0.1:36355: read: connection reset by peer
	I1101 09:30:44.683458 2502925 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-357229
	
	I1101 09:30:44.683483 2502925 ubuntu.go:182] provisioning hostname "no-preload-357229"
	I1101 09:30:44.683551 2502925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-357229
	I1101 09:30:44.701475 2502925 main.go:143] libmachine: Using SSH client type: native
	I1101 09:30:44.701793 2502925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36355 <nil> <nil>}
	I1101 09:30:44.701837 2502925 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-357229 && echo "no-preload-357229" | sudo tee /etc/hostname
	I1101 09:30:44.856789 2502925 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-357229
	
	I1101 09:30:44.856914 2502925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-357229
	I1101 09:30:44.874178 2502925 main.go:143] libmachine: Using SSH client type: native
	I1101 09:30:44.874489 2502925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36355 <nil> <nil>}
	I1101 09:30:44.874513 2502925 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-357229' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-357229/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-357229' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:30:45.037922 2502925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:30:45.037950 2502925 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-2314135/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-2314135/.minikube}
	I1101 09:30:45.037995 2502925 ubuntu.go:190] setting up certificates
	I1101 09:30:45.038007 2502925 provision.go:84] configureAuth start
	I1101 09:30:45.038098 2502925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-357229
	I1101 09:30:45.068675 2502925 provision.go:143] copyHostCerts
	I1101 09:30:45.068870 2502925 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem, removing ...
	I1101 09:30:45.068937 2502925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem
	I1101 09:30:45.070513 2502925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem (1675 bytes)
	I1101 09:30:45.070691 2502925 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem, removing ...
	I1101 09:30:45.070700 2502925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem
	I1101 09:30:45.070737 2502925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem (1082 bytes)
	I1101 09:30:45.070795 2502925 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem, removing ...
	I1101 09:30:45.070800 2502925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem
	I1101 09:30:45.070825 2502925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem (1123 bytes)
	I1101 09:30:45.070879 2502925 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem org=jenkins.no-preload-357229 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-357229]
	I1101 09:30:45.217050 2502925 provision.go:177] copyRemoteCerts
	I1101 09:30:45.217158 2502925 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:30:45.217233 2502925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-357229
	I1101 09:30:45.241135 2502925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36355 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/no-preload-357229/id_rsa Username:docker}
	I1101 09:30:45.371338 2502925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:30:45.397052 2502925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 09:30:45.418921 2502925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 09:30:45.436616 2502925 provision.go:87] duration metric: took 398.567642ms to configureAuth
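	// Illustrative sketch, not part of the captured log: configureAuth above generated a
	// server cert with SANs [127.0.0.1 192.168.85.2 localhost minikube no-preload-357229]
	// and the scp lines copied it to /etc/docker/server.pem on the node. This reads the
	// cert back and prints its SANs; the path is an assumption taken from that scp line.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/etc/docker/server.pem")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Println("DNS SANs:", cert.DNSNames)    // expect: localhost minikube no-preload-357229
		fmt.Println("IP SANs: ", cert.IPAddresses) // expect: 127.0.0.1 192.168.85.2
	}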
	I1101 09:30:45.436646 2502925 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:30:45.436855 2502925 config.go:182] Loaded profile config "no-preload-357229": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:30:45.436973 2502925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-357229
	I1101 09:30:45.458224 2502925 main.go:143] libmachine: Using SSH client type: native
	I1101 09:30:45.458661 2502925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36355 <nil> <nil>}
	I1101 09:30:45.458697 2502925 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:30:45.770163 2502925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:30:45.770183 2502925 machine.go:97] duration metric: took 4.269464875s to provisionDockerMachine
	I1101 09:30:45.770194 2502925 start.go:293] postStartSetup for "no-preload-357229" (driver="docker")
	I1101 09:30:45.770204 2502925 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:30:45.770262 2502925 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:30:45.770308 2502925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-357229
	I1101 09:30:45.791583 2502925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36355 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/no-preload-357229/id_rsa Username:docker}
	I1101 09:30:45.895347 2502925 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:30:45.898523 2502925 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:30:45.898549 2502925 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:30:45.898559 2502925 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/addons for local assets ...
	I1101 09:30:45.898609 2502925 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/files for local assets ...
	I1101 09:30:45.898685 2502925 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem -> 23159822.pem in /etc/ssl/certs
	I1101 09:30:45.898787 2502925 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:30:45.905882 2502925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem --> /etc/ssl/certs/23159822.pem (1708 bytes)
	I1101 09:30:45.923466 2502925 start.go:296] duration metric: took 153.25656ms for postStartSetup
	I1101 09:30:45.923544 2502925 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:30:45.923597 2502925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-357229
	I1101 09:30:45.940302 2502925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36355 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/no-preload-357229/id_rsa Username:docker}
	I1101 09:30:46.044963 2502925 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:30:46.051478 2502925 fix.go:56] duration metric: took 4.907267347s for fixHost
	I1101 09:30:46.051503 2502925 start.go:83] releasing machines lock for "no-preload-357229", held for 4.907331681s
	I1101 09:30:46.051566 2502925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-357229
	I1101 09:30:46.068147 2502925 ssh_runner.go:195] Run: cat /version.json
	I1101 09:30:46.068175 2502925 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:30:46.068197 2502925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-357229
	I1101 09:30:46.068238 2502925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-357229
	I1101 09:30:46.088402 2502925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36355 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/no-preload-357229/id_rsa Username:docker}
	I1101 09:30:46.109361 2502925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36355 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/no-preload-357229/id_rsa Username:docker}
	I1101 09:30:46.280363 2502925 ssh_runner.go:195] Run: systemctl --version
	I1101 09:30:46.286607 2502925 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:30:46.328353 2502925 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:30:46.334423 2502925 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:30:46.334490 2502925 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:30:46.342093 2502925 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 09:30:46.342117 2502925 start.go:496] detecting cgroup driver to use...
	I1101 09:30:46.342148 2502925 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 09:30:46.342193 2502925 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:30:46.357204 2502925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:30:46.370290 2502925 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:30:46.370363 2502925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:30:46.385737 2502925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:30:46.398771 2502925 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:30:46.508091 2502925 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:30:46.619715 2502925 docker.go:234] disabling docker service ...
	I1101 09:30:46.619787 2502925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:30:46.636547 2502925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:30:46.649492 2502925 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:30:46.771092 2502925 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:30:46.887550 2502925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:30:46.901178 2502925 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:30:46.915212 2502925 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:30:46.915319 2502925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:30:46.924678 2502925 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:30:46.924747 2502925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:30:46.933200 2502925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:30:46.941428 2502925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:30:46.950459 2502925 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:30:46.958743 2502925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:30:46.967250 2502925 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:30:46.975780 2502925 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:30:46.985720 2502925 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:30:46.993633 2502925 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:30:47.000874 2502925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:30:47.128685 2502925 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:30:47.263845 2502925 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:30:47.264040 2502925 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:30:47.268195 2502925 start.go:564] Will wait 60s for crictl version
	I1101 09:30:47.268301 2502925 ssh_runner.go:195] Run: which crictl
	I1101 09:30:47.271962 2502925 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:30:47.295406 2502925 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:30:47.295550 2502925 ssh_runner.go:195] Run: crio --version
	I1101 09:30:47.331992 2502925 ssh_runner.go:195] Run: crio --version
	I1101 09:30:47.370342 2502925 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
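	A minimal sketch (not captured from this run) of what the sed/tee commands above leave on the node, using only the paths and values that appear in the preceding log lines:
	
		# crictl is pointed at the CRI-O socket
		cat /etc/crictl.yaml
		#   runtime-endpoint: unix:///var/run/crio/crio.sock
	
		# keys patched into the CRI-O drop-in by the sed commands
		grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
		#   pause_image = "registry.k8s.io/pause:3.10.1"
		#   cgroup_manager = "cgroupfs"
		#   conmon_cgroup = "pod"
		#   "net.ipv4.ip_unprivileged_port_start=0",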
	W1101 09:30:44.551284 2499376 node_ready.go:57] node "embed-certs-312549" has "Ready":"False" status (will retry)
	W1101 09:30:47.050149 2499376 node_ready.go:57] node "embed-certs-312549" has "Ready":"False" status (will retry)
	I1101 09:30:47.373257 2502925 cli_runner.go:164] Run: docker network inspect no-preload-357229 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:30:47.389418 2502925 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 09:30:47.393219 2502925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:30:47.402536 2502925 kubeadm.go:884] updating cluster {Name:no-preload-357229 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-357229 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:30:47.402645 2502925 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:30:47.402690 2502925 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:30:47.436351 2502925 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:30:47.436373 2502925 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:30:47.436381 2502925 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1101 09:30:47.436470 2502925 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-357229 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-357229 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:30:47.436557 2502925 ssh_runner.go:195] Run: crio config
	I1101 09:30:47.497852 2502925 cni.go:84] Creating CNI manager for ""
	I1101 09:30:47.497880 2502925 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:30:47.497904 2502925 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:30:47.497926 2502925 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-357229 NodeName:no-preload-357229 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:30:47.498068 2502925 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-357229"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:30:47.498139 2502925 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:30:47.506830 2502925 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:30:47.506903 2502925 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:30:47.514911 2502925 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 09:30:47.527611 2502925 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:30:47.540905 2502925 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
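	Once the rendered kubeadm config above is copied to /var/tmp/minikube/kubeadm.yaml.new (the scp step just logged), it can be sanity-checked before use; a sketch that assumes a kubeadm binary sits alongside the kubectl/kubelet binaries shown in this log:
	
		# validate the generated config against the v1beta4 schema (kubeadm >= 1.26)
		sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
		  --config /var/tmp/minikube/kubeadm.yaml.new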
	I1101 09:30:47.559142 2502925 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:30:47.563065 2502925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:30:47.574415 2502925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:30:47.693372 2502925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:30:47.709272 2502925 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/no-preload-357229 for IP: 192.168.85.2
	I1101 09:30:47.709295 2502925 certs.go:195] generating shared ca certs ...
	I1101 09:30:47.709311 2502925 certs.go:227] acquiring lock for ca certs: {Name:mk24842b93d4e231663829c7c8677798ff77a3a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:30:47.709448 2502925 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key
	I1101 09:30:47.709503 2502925 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key
	I1101 09:30:47.709515 2502925 certs.go:257] generating profile certs ...
	I1101 09:30:47.709604 2502925 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/no-preload-357229/client.key
	I1101 09:30:47.709670 2502925 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/no-preload-357229/apiserver.key.b9ab13a6
	I1101 09:30:47.709721 2502925 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/no-preload-357229/proxy-client.key
	I1101 09:30:47.709835 2502925 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem (1338 bytes)
	W1101 09:30:47.709871 2502925 certs.go:480] ignoring /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982_empty.pem, impossibly tiny 0 bytes
	I1101 09:30:47.709888 2502925 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 09:30:47.709915 2502925 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:30:47.709950 2502925 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:30:47.709973 2502925 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem (1675 bytes)
	I1101 09:30:47.710024 2502925 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem (1708 bytes)
	I1101 09:30:47.710606 2502925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:30:47.732389 2502925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 09:30:47.751660 2502925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:30:47.770354 2502925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:30:47.790262 2502925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/no-preload-357229/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 09:30:47.810042 2502925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/no-preload-357229/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:30:47.831754 2502925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/no-preload-357229/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:30:47.860916 2502925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/no-preload-357229/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:30:47.880613 2502925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:30:47.909703 2502925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem --> /usr/share/ca-certificates/2315982.pem (1338 bytes)
	I1101 09:30:47.932525 2502925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem --> /usr/share/ca-certificates/23159822.pem (1708 bytes)
	I1101 09:30:47.954664 2502925 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:30:47.968209 2502925 ssh_runner.go:195] Run: openssl version
	I1101 09:30:47.974792 2502925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2315982.pem && ln -fs /usr/share/ca-certificates/2315982.pem /etc/ssl/certs/2315982.pem"
	I1101 09:30:47.983740 2502925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2315982.pem
	I1101 09:30:47.987421 2502925 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 08:36 /usr/share/ca-certificates/2315982.pem
	I1101 09:30:47.987537 2502925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2315982.pem
	I1101 09:30:48.029425 2502925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2315982.pem /etc/ssl/certs/51391683.0"
	I1101 09:30:48.038228 2502925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23159822.pem && ln -fs /usr/share/ca-certificates/23159822.pem /etc/ssl/certs/23159822.pem"
	I1101 09:30:48.047011 2502925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23159822.pem
	I1101 09:30:48.052560 2502925 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 08:36 /usr/share/ca-certificates/23159822.pem
	I1101 09:30:48.052650 2502925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23159822.pem
	I1101 09:30:48.095552 2502925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23159822.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:30:48.103981 2502925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:30:48.112432 2502925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:30:48.116123 2502925 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:30:48.116187 2502925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:30:48.161234 2502925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
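	The symlink names created above (51391683.0, 3ec20f2e.0, b5213941.0) are the OpenSSL subject hashes of the respective certificates plus a ".0" suffix, which is how OpenSSL looks up trusted CAs under /etc/ssl/certs; a sketch of the equivalent manual step:
	
		# derive the hash and create the lookup symlink for the minikube CA
		hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # hash is b5213941 in this log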
	I1101 09:30:48.169373 2502925 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:30:48.173421 2502925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:30:48.215962 2502925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:30:48.257496 2502925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:30:48.300192 2502925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:30:48.343147 2502925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:30:48.396003 2502925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
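	The "-checkend 86400" checks above exit non-zero if a certificate expires within the next 86400 seconds (24 hours), which is how the restart path decides whether the existing control-plane certificates can be reused; a standalone sketch:
	
		# succeed only if the cert is still valid for at least 24h
		openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
		  && echo "cert valid for >= 24h" \
		  || echo "cert expires within 24h"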
	I1101 09:30:48.479090 2502925 kubeadm.go:401] StartCluster: {Name:no-preload-357229 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-357229 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:30:48.479223 2502925 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:30:48.479332 2502925 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:30:48.553965 2502925 cri.go:89] found id: "66bc439562b33aee6bf209a9e922684969e9f8205826ecb76d4a5f42eff5e976"
	I1101 09:30:48.553986 2502925 cri.go:89] found id: "ae5869cf712bf7909c67aaf8a14f0be1a3ace2ea33f2b9abc08d3e78149e156f"
	I1101 09:30:48.553991 2502925 cri.go:89] found id: "e3948966e2df765c7a12e39bf7465a601cc905044915c5c42848f542f11cee90"
	I1101 09:30:48.553995 2502925 cri.go:89] found id: ""
	I1101 09:30:48.554043 2502925 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 09:30:48.575095 2502925 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:30:48Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:30:48.575185 2502925 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:30:48.593719 2502925 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 09:30:48.593742 2502925 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 09:30:48.593810 2502925 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 09:30:48.604446 2502925 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:30:48.605295 2502925 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-357229" does not appear in /home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:30:48.605825 2502925 kubeconfig.go:62] /home/jenkins/minikube-integration/21835-2314135/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-357229" cluster setting kubeconfig missing "no-preload-357229" context setting]
	I1101 09:30:48.606607 2502925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/kubeconfig: {Name:mk53329368b7306829f4e47471838b13e1e36d2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:30:48.608462 2502925 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 09:30:48.619184 2502925 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1101 09:30:48.619226 2502925 kubeadm.go:602] duration metric: took 25.46889ms to restartPrimaryControlPlane
	I1101 09:30:48.619235 2502925 kubeadm.go:403] duration metric: took 140.156689ms to StartCluster
	I1101 09:30:48.619250 2502925 settings.go:142] acquiring lock: {Name:mka73a3765cb6575d4abe38a6ae3325222684786 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:30:48.619326 2502925 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:30:48.620878 2502925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/kubeconfig: {Name:mk53329368b7306829f4e47471838b13e1e36d2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:30:48.621118 2502925 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:30:48.621534 2502925 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:30:48.621609 2502925 addons.go:70] Setting storage-provisioner=true in profile "no-preload-357229"
	I1101 09:30:48.621631 2502925 addons.go:239] Setting addon storage-provisioner=true in "no-preload-357229"
	W1101 09:30:48.621642 2502925 addons.go:248] addon storage-provisioner should already be in state true
	I1101 09:30:48.621663 2502925 host.go:66] Checking if "no-preload-357229" exists ...
	I1101 09:30:48.622161 2502925 cli_runner.go:164] Run: docker container inspect no-preload-357229 --format={{.State.Status}}
	I1101 09:30:48.622320 2502925 config.go:182] Loaded profile config "no-preload-357229": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:30:48.622382 2502925 addons.go:70] Setting dashboard=true in profile "no-preload-357229"
	I1101 09:30:48.622396 2502925 addons.go:239] Setting addon dashboard=true in "no-preload-357229"
	W1101 09:30:48.622410 2502925 addons.go:248] addon dashboard should already be in state true
	I1101 09:30:48.622431 2502925 host.go:66] Checking if "no-preload-357229" exists ...
	I1101 09:30:48.622903 2502925 cli_runner.go:164] Run: docker container inspect no-preload-357229 --format={{.State.Status}}
	I1101 09:30:48.624605 2502925 addons.go:70] Setting default-storageclass=true in profile "no-preload-357229"
	I1101 09:30:48.624848 2502925 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-357229"
	I1101 09:30:48.626062 2502925 cli_runner.go:164] Run: docker container inspect no-preload-357229 --format={{.State.Status}}
	I1101 09:30:48.627898 2502925 out.go:179] * Verifying Kubernetes components...
	I1101 09:30:48.642100 2502925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:30:48.673027 2502925 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:30:48.677212 2502925 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:30:48.677234 2502925 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:30:48.677299 2502925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-357229
	I1101 09:30:48.694818 2502925 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 09:30:48.695536 2502925 addons.go:239] Setting addon default-storageclass=true in "no-preload-357229"
	W1101 09:30:48.695549 2502925 addons.go:248] addon default-storageclass should already be in state true
	I1101 09:30:48.695572 2502925 host.go:66] Checking if "no-preload-357229" exists ...
	I1101 09:30:48.697213 2502925 cli_runner.go:164] Run: docker container inspect no-preload-357229 --format={{.State.Status}}
	I1101 09:30:48.707906 2502925 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 09:30:48.710868 2502925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36355 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/no-preload-357229/id_rsa Username:docker}
	I1101 09:30:48.710886 2502925 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 09:30:48.710978 2502925 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 09:30:48.711050 2502925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-357229
	I1101 09:30:48.750125 2502925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36355 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/no-preload-357229/id_rsa Username:docker}
	I1101 09:30:48.757556 2502925 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:30:48.757580 2502925 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:30:48.757655 2502925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-357229
	I1101 09:30:48.787057 2502925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36355 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/no-preload-357229/id_rsa Username:docker}
	I1101 09:30:49.012712 2502925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:30:49.031603 2502925 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 09:30:49.031629 2502925 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 09:30:49.038194 2502925 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:30:49.056580 2502925 node_ready.go:35] waiting up to 6m0s for node "no-preload-357229" to be "Ready" ...
	I1101 09:30:49.078419 2502925 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 09:30:49.078445 2502925 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 09:30:49.102556 2502925 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:30:49.118569 2502925 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 09:30:49.118598 2502925 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 09:30:49.146517 2502925 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 09:30:49.146546 2502925 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 09:30:49.201535 2502925 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 09:30:49.201566 2502925 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 09:30:49.226159 2502925 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 09:30:49.226183 2502925 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 09:30:49.253495 2502925 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 09:30:49.253520 2502925 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 09:30:49.324483 2502925 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 09:30:49.324523 2502925 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 09:30:49.390713 2502925 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 09:30:49.390738 2502925 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 09:30:49.419266 2502925 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1101 09:30:49.050287 2499376 node_ready.go:57] node "embed-certs-312549" has "Ready":"False" status (will retry)
	W1101 09:30:51.050692 2499376 node_ready.go:57] node "embed-certs-312549" has "Ready":"False" status (will retry)
	W1101 09:30:53.550069 2499376 node_ready.go:57] node "embed-certs-312549" has "Ready":"False" status (will retry)
	I1101 09:30:53.068636 2502925 node_ready.go:49] node "no-preload-357229" is "Ready"
	I1101 09:30:53.068673 2502925 node_ready.go:38] duration metric: took 4.012036136s for node "no-preload-357229" to be "Ready" ...
	I1101 09:30:53.068688 2502925 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:30:53.068744 2502925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:30:54.299743 2502925 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.261509969s)
	I1101 09:30:54.299803 2502925 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.197222627s)
	I1101 09:30:54.300088 2502925 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.880791186s)
	I1101 09:30:54.300228 2502925 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.231467654s)
	I1101 09:30:54.300249 2502925 api_server.go:72] duration metric: took 5.679102708s to wait for apiserver process to appear ...
	I1101 09:30:54.300255 2502925 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:30:54.300271 2502925 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:30:54.303040 2502925 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-357229 addons enable metrics-server
	
	I1101 09:30:54.310609 2502925 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1101 09:30:54.311105 2502925 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1101 09:30:54.312175 2502925 api_server.go:141] control plane version: v1.34.1
	I1101 09:30:54.312199 2502925 api_server.go:131] duration metric: took 11.937023ms to wait for apiserver health ...
	I1101 09:30:54.312208 2502925 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:30:54.313606 2502925 addons.go:515] duration metric: took 5.692059773s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
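	A possible follow-up check, not part of this run's log, to confirm the dashboard manifests applied above actually came up (assuming the addon's default "kubernetes-dashboard" namespace):
	
		kubectl --context no-preload-357229 -n kubernetes-dashboard get deploy,svc,pods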
	I1101 09:30:54.315845 2502925 system_pods.go:59] 8 kube-system pods found
	I1101 09:30:54.315949 2502925 system_pods.go:61] "coredns-66bc5c9577-txw5s" [5c832644-3e2e-4c30-8ca3-39f6885bcb2b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:30:54.315959 2502925 system_pods.go:61] "etcd-no-preload-357229" [a63fcd49-ae1b-43ce-b495-cc8cc64e7fa9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:30:54.315969 2502925 system_pods.go:61] "kindnet-lxlsh" [e0c15626-2ec0-46cd-9e60-5e539d445218] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 09:30:54.315976 2502925 system_pods.go:61] "kube-apiserver-no-preload-357229" [2338f847-b546-4fee-8ef7-b2e93e09276e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:30:54.315982 2502925 system_pods.go:61] "kube-controller-manager-no-preload-357229" [8aa33e99-9e5f-495f-8304-6c7db573fde0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:30:54.315991 2502925 system_pods.go:61] "kube-proxy-2mqtw" [729122cc-da91-48af-9470-0a01890691df] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 09:30:54.315998 2502925 system_pods.go:61] "kube-scheduler-no-preload-357229" [9912cbd9-29f1-4ff7-bbe0-66a5e8b4b4a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:30:54.316005 2502925 system_pods.go:61] "storage-provisioner" [885f6151-81b6-4759-893a-a719350ab59b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:30:54.316010 2502925 system_pods.go:74] duration metric: took 3.79749ms to wait for pod list to return data ...
	I1101 09:30:54.316020 2502925 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:30:54.319088 2502925 default_sa.go:45] found service account: "default"
	I1101 09:30:54.319151 2502925 default_sa.go:55] duration metric: took 3.124829ms for default service account to be created ...
	I1101 09:30:54.319176 2502925 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:30:54.334344 2502925 system_pods.go:86] 8 kube-system pods found
	I1101 09:30:54.334380 2502925 system_pods.go:89] "coredns-66bc5c9577-txw5s" [5c832644-3e2e-4c30-8ca3-39f6885bcb2b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:30:54.334389 2502925 system_pods.go:89] "etcd-no-preload-357229" [a63fcd49-ae1b-43ce-b495-cc8cc64e7fa9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:30:54.334396 2502925 system_pods.go:89] "kindnet-lxlsh" [e0c15626-2ec0-46cd-9e60-5e539d445218] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 09:30:54.334441 2502925 system_pods.go:89] "kube-apiserver-no-preload-357229" [2338f847-b546-4fee-8ef7-b2e93e09276e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:30:54.334449 2502925 system_pods.go:89] "kube-controller-manager-no-preload-357229" [8aa33e99-9e5f-495f-8304-6c7db573fde0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:30:54.334461 2502925 system_pods.go:89] "kube-proxy-2mqtw" [729122cc-da91-48af-9470-0a01890691df] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 09:30:54.334467 2502925 system_pods.go:89] "kube-scheduler-no-preload-357229" [9912cbd9-29f1-4ff7-bbe0-66a5e8b4b4a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:30:54.334479 2502925 system_pods.go:89] "storage-provisioner" [885f6151-81b6-4759-893a-a719350ab59b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:30:54.334499 2502925 system_pods.go:126] duration metric: took 15.310985ms to wait for k8s-apps to be running ...
	I1101 09:30:54.334516 2502925 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:30:54.334600 2502925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:30:54.353463 2502925 system_svc.go:56] duration metric: took 18.938232ms WaitForService to wait for kubelet
	I1101 09:30:54.353491 2502925 kubeadm.go:587] duration metric: took 5.732342701s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:30:54.353509 2502925 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:30:54.356544 2502925 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 09:30:54.356575 2502925 node_conditions.go:123] node cpu capacity is 2
	I1101 09:30:54.356593 2502925 node_conditions.go:105] duration metric: took 3.078396ms to run NodePressure ...
	I1101 09:30:54.356606 2502925 start.go:242] waiting for startup goroutines ...
	I1101 09:30:54.356631 2502925 start.go:247] waiting for cluster config update ...
	I1101 09:30:54.356648 2502925 start.go:256] writing updated cluster config ...
	I1101 09:30:54.356983 2502925 ssh_runner.go:195] Run: rm -f paused
	I1101 09:30:54.361416 2502925 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:30:54.416085 2502925 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-txw5s" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 09:30:55.550173 2499376 node_ready.go:57] node "embed-certs-312549" has "Ready":"False" status (will retry)
	W1101 09:30:58.050677 2499376 node_ready.go:57] node "embed-certs-312549" has "Ready":"False" status (will retry)
	W1101 09:30:56.422058 2502925 pod_ready.go:104] pod "coredns-66bc5c9577-txw5s" is not "Ready", error: <nil>
	W1101 09:30:58.422297 2502925 pod_ready.go:104] pod "coredns-66bc5c9577-txw5s" is not "Ready", error: <nil>
	W1101 09:31:00.428148 2502925 pod_ready.go:104] pod "coredns-66bc5c9577-txw5s" is not "Ready", error: <nil>
	W1101 09:31:00.052021 2499376 node_ready.go:57] node "embed-certs-312549" has "Ready":"False" status (will retry)
	W1101 09:31:02.549745 2499376 node_ready.go:57] node "embed-certs-312549" has "Ready":"False" status (will retry)
	W1101 09:31:02.447716 2502925 pod_ready.go:104] pod "coredns-66bc5c9577-txw5s" is not "Ready", error: <nil>
	W1101 09:31:04.922656 2502925 pod_ready.go:104] pod "coredns-66bc5c9577-txw5s" is not "Ready", error: <nil>
	W1101 09:31:05.053034 2499376 node_ready.go:57] node "embed-certs-312549" has "Ready":"False" status (will retry)
	W1101 09:31:07.549473 2499376 node_ready.go:57] node "embed-certs-312549" has "Ready":"False" status (will retry)
	W1101 09:31:07.421845 2502925 pod_ready.go:104] pod "coredns-66bc5c9577-txw5s" is not "Ready", error: <nil>
	W1101 09:31:09.922386 2502925 pod_ready.go:104] pod "coredns-66bc5c9577-txw5s" is not "Ready", error: <nil>
	W1101 09:31:10.049985 2499376 node_ready.go:57] node "embed-certs-312549" has "Ready":"False" status (will retry)
	W1101 09:31:12.550427 2499376 node_ready.go:57] node "embed-certs-312549" has "Ready":"False" status (will retry)
	W1101 09:31:11.922826 2502925 pod_ready.go:104] pod "coredns-66bc5c9577-txw5s" is not "Ready", error: <nil>
	W1101 09:31:14.421687 2502925 pod_ready.go:104] pod "coredns-66bc5c9577-txw5s" is not "Ready", error: <nil>
	W1101 09:31:15.050004 2499376 node_ready.go:57] node "embed-certs-312549" has "Ready":"False" status (will retry)
	W1101 09:31:17.050351 2499376 node_ready.go:57] node "embed-certs-312549" has "Ready":"False" status (will retry)
	W1101 09:31:16.921521 2502925 pod_ready.go:104] pod "coredns-66bc5c9577-txw5s" is not "Ready", error: <nil>
	W1101 09:31:19.421785 2502925 pod_ready.go:104] pod "coredns-66bc5c9577-txw5s" is not "Ready", error: <nil>
	W1101 09:31:19.054274 2499376 node_ready.go:57] node "embed-certs-312549" has "Ready":"False" status (will retry)
	I1101 09:31:21.049416 2499376 node_ready.go:49] node "embed-certs-312549" is "Ready"
	I1101 09:31:21.049448 2499376 node_ready.go:38] duration metric: took 41.002956186s for node "embed-certs-312549" to be "Ready" ...
	I1101 09:31:21.049462 2499376 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:31:21.049521 2499376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:31:21.085008 2499376 api_server.go:72] duration metric: took 41.931361925s to wait for apiserver process to appear ...
	I1101 09:31:21.085033 2499376 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:31:21.085052 2499376 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 09:31:21.093335 2499376 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1101 09:31:21.094361 2499376 api_server.go:141] control plane version: v1.34.1
	I1101 09:31:21.094404 2499376 api_server.go:131] duration metric: took 9.363991ms to wait for apiserver health ...
	I1101 09:31:21.094415 2499376 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:31:21.097465 2499376 system_pods.go:59] 8 kube-system pods found
	I1101 09:31:21.097502 2499376 system_pods.go:61] "coredns-66bc5c9577-jnqnt" [9c241743-79ee-45ae-a369-2b4407cec026] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:31:21.097509 2499376 system_pods.go:61] "etcd-embed-certs-312549" [52f5de46-d12b-44f9-9616-8e55b58a80e3] Running
	I1101 09:31:21.097515 2499376 system_pods.go:61] "kindnet-xzrpm" [9336823d-a6b8-44ac-ba96-9242d7ea9873] Running
	I1101 09:31:21.097519 2499376 system_pods.go:61] "kube-apiserver-embed-certs-312549" [6c11efc0-4c2f-4bd4-abb7-880d4ac3d8d2] Running
	I1101 09:31:21.097524 2499376 system_pods.go:61] "kube-controller-manager-embed-certs-312549" [8c47e850-5e66-4940-81fd-c978de94e2e3] Running
	I1101 09:31:21.097529 2499376 system_pods.go:61] "kube-proxy-8d2xs" [d7bfac1f-401f-4f8d-8584-a5240e63915f] Running
	I1101 09:31:21.097534 2499376 system_pods.go:61] "kube-scheduler-embed-certs-312549" [618c4131-1a72-4c19-92fe-3af613bbe965] Running
	I1101 09:31:21.097547 2499376 system_pods.go:61] "storage-provisioner" [74ce420a-03e3-4f7c-b544-860b65f44d69] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:31:21.097561 2499376 system_pods.go:74] duration metric: took 3.139441ms to wait for pod list to return data ...
	I1101 09:31:21.097571 2499376 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:31:21.099917 2499376 default_sa.go:45] found service account: "default"
	I1101 09:31:21.099937 2499376 default_sa.go:55] duration metric: took 2.35845ms for default service account to be created ...
	I1101 09:31:21.099946 2499376 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:31:21.102655 2499376 system_pods.go:86] 8 kube-system pods found
	I1101 09:31:21.102688 2499376 system_pods.go:89] "coredns-66bc5c9577-jnqnt" [9c241743-79ee-45ae-a369-2b4407cec026] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:31:21.102695 2499376 system_pods.go:89] "etcd-embed-certs-312549" [52f5de46-d12b-44f9-9616-8e55b58a80e3] Running
	I1101 09:31:21.102701 2499376 system_pods.go:89] "kindnet-xzrpm" [9336823d-a6b8-44ac-ba96-9242d7ea9873] Running
	I1101 09:31:21.102706 2499376 system_pods.go:89] "kube-apiserver-embed-certs-312549" [6c11efc0-4c2f-4bd4-abb7-880d4ac3d8d2] Running
	I1101 09:31:21.102712 2499376 system_pods.go:89] "kube-controller-manager-embed-certs-312549" [8c47e850-5e66-4940-81fd-c978de94e2e3] Running
	I1101 09:31:21.102716 2499376 system_pods.go:89] "kube-proxy-8d2xs" [d7bfac1f-401f-4f8d-8584-a5240e63915f] Running
	I1101 09:31:21.102721 2499376 system_pods.go:89] "kube-scheduler-embed-certs-312549" [618c4131-1a72-4c19-92fe-3af613bbe965] Running
	I1101 09:31:21.102732 2499376 system_pods.go:89] "storage-provisioner" [74ce420a-03e3-4f7c-b544-860b65f44d69] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:31:21.102754 2499376 retry.go:31] will retry after 204.425993ms: missing components: kube-dns
	I1101 09:31:21.314589 2499376 system_pods.go:86] 8 kube-system pods found
	I1101 09:31:21.314674 2499376 system_pods.go:89] "coredns-66bc5c9577-jnqnt" [9c241743-79ee-45ae-a369-2b4407cec026] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:31:21.314695 2499376 system_pods.go:89] "etcd-embed-certs-312549" [52f5de46-d12b-44f9-9616-8e55b58a80e3] Running
	I1101 09:31:21.314716 2499376 system_pods.go:89] "kindnet-xzrpm" [9336823d-a6b8-44ac-ba96-9242d7ea9873] Running
	I1101 09:31:21.314748 2499376 system_pods.go:89] "kube-apiserver-embed-certs-312549" [6c11efc0-4c2f-4bd4-abb7-880d4ac3d8d2] Running
	I1101 09:31:21.314773 2499376 system_pods.go:89] "kube-controller-manager-embed-certs-312549" [8c47e850-5e66-4940-81fd-c978de94e2e3] Running
	I1101 09:31:21.314791 2499376 system_pods.go:89] "kube-proxy-8d2xs" [d7bfac1f-401f-4f8d-8584-a5240e63915f] Running
	I1101 09:31:21.314810 2499376 system_pods.go:89] "kube-scheduler-embed-certs-312549" [618c4131-1a72-4c19-92fe-3af613bbe965] Running
	I1101 09:31:21.314830 2499376 system_pods.go:89] "storage-provisioner" [74ce420a-03e3-4f7c-b544-860b65f44d69] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:31:21.314870 2499376 retry.go:31] will retry after 371.730895ms: missing components: kube-dns
	I1101 09:31:21.696379 2499376 system_pods.go:86] 8 kube-system pods found
	I1101 09:31:21.696407 2499376 system_pods.go:89] "coredns-66bc5c9577-jnqnt" [9c241743-79ee-45ae-a369-2b4407cec026] Running
	I1101 09:31:21.696425 2499376 system_pods.go:89] "etcd-embed-certs-312549" [52f5de46-d12b-44f9-9616-8e55b58a80e3] Running
	I1101 09:31:21.696430 2499376 system_pods.go:89] "kindnet-xzrpm" [9336823d-a6b8-44ac-ba96-9242d7ea9873] Running
	I1101 09:31:21.696434 2499376 system_pods.go:89] "kube-apiserver-embed-certs-312549" [6c11efc0-4c2f-4bd4-abb7-880d4ac3d8d2] Running
	I1101 09:31:21.696439 2499376 system_pods.go:89] "kube-controller-manager-embed-certs-312549" [8c47e850-5e66-4940-81fd-c978de94e2e3] Running
	I1101 09:31:21.696443 2499376 system_pods.go:89] "kube-proxy-8d2xs" [d7bfac1f-401f-4f8d-8584-a5240e63915f] Running
	I1101 09:31:21.696448 2499376 system_pods.go:89] "kube-scheduler-embed-certs-312549" [618c4131-1a72-4c19-92fe-3af613bbe965] Running
	I1101 09:31:21.696453 2499376 system_pods.go:89] "storage-provisioner" [74ce420a-03e3-4f7c-b544-860b65f44d69] Running
	I1101 09:31:21.696461 2499376 system_pods.go:126] duration metric: took 596.508869ms to wait for k8s-apps to be running ...
	I1101 09:31:21.696468 2499376 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:31:21.696526 2499376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:31:21.714389 2499376 system_svc.go:56] duration metric: took 17.91134ms WaitForService to wait for kubelet
	I1101 09:31:21.714416 2499376 kubeadm.go:587] duration metric: took 42.560773299s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:31:21.714435 2499376 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:31:21.718409 2499376 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 09:31:21.718438 2499376 node_conditions.go:123] node cpu capacity is 2
	I1101 09:31:21.718451 2499376 node_conditions.go:105] duration metric: took 4.010678ms to run NodePressure ...
	I1101 09:31:21.718465 2499376 start.go:242] waiting for startup goroutines ...
	I1101 09:31:21.718472 2499376 start.go:247] waiting for cluster config update ...
	I1101 09:31:21.718484 2499376 start.go:256] writing updated cluster config ...
	I1101 09:31:21.718770 2499376 ssh_runner.go:195] Run: rm -f paused
	I1101 09:31:21.723768 2499376 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:31:21.729298 2499376 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jnqnt" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:21.735120 2499376 pod_ready.go:94] pod "coredns-66bc5c9577-jnqnt" is "Ready"
	I1101 09:31:21.735195 2499376 pod_ready.go:86] duration metric: took 5.87164ms for pod "coredns-66bc5c9577-jnqnt" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:21.738086 2499376 pod_ready.go:83] waiting for pod "etcd-embed-certs-312549" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:21.746016 2499376 pod_ready.go:94] pod "etcd-embed-certs-312549" is "Ready"
	I1101 09:31:21.746046 2499376 pod_ready.go:86] duration metric: took 7.891892ms for pod "etcd-embed-certs-312549" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:21.748503 2499376 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-312549" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:21.752912 2499376 pod_ready.go:94] pod "kube-apiserver-embed-certs-312549" is "Ready"
	I1101 09:31:21.752983 2499376 pod_ready.go:86] duration metric: took 4.456295ms for pod "kube-apiserver-embed-certs-312549" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:21.759498 2499376 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-312549" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:22.128084 2499376 pod_ready.go:94] pod "kube-controller-manager-embed-certs-312549" is "Ready"
	I1101 09:31:22.128123 2499376 pod_ready.go:86] duration metric: took 368.59079ms for pod "kube-controller-manager-embed-certs-312549" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:22.331366 2499376 pod_ready.go:83] waiting for pod "kube-proxy-8d2xs" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:22.728407 2499376 pod_ready.go:94] pod "kube-proxy-8d2xs" is "Ready"
	I1101 09:31:22.728443 2499376 pod_ready.go:86] duration metric: took 397.000916ms for pod "kube-proxy-8d2xs" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:22.931775 2499376 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-312549" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:23.333605 2499376 pod_ready.go:94] pod "kube-scheduler-embed-certs-312549" is "Ready"
	I1101 09:31:23.333638 2499376 pod_ready.go:86] duration metric: took 401.762293ms for pod "kube-scheduler-embed-certs-312549" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:23.333651 2499376 pod_ready.go:40] duration metric: took 1.609815471s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:31:23.395819 2499376 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 09:31:23.399176 2499376 out.go:179] * Done! kubectl is now configured to use "embed-certs-312549" cluster and "default" namespace by default
	W1101 09:31:21.921521 2502925 pod_ready.go:104] pod "coredns-66bc5c9577-txw5s" is not "Ready", error: <nil>
	W1101 09:31:23.922782 2502925 pod_ready.go:104] pod "coredns-66bc5c9577-txw5s" is not "Ready", error: <nil>
	W1101 09:31:26.421521 2502925 pod_ready.go:104] pod "coredns-66bc5c9577-txw5s" is not "Ready", error: <nil>
	W1101 09:31:28.921584 2502925 pod_ready.go:104] pod "coredns-66bc5c9577-txw5s" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 01 09:31:21 embed-certs-312549 crio[835]: time="2025-11-01T09:31:21.326085379Z" level=info msg="Created container 4f5a83143fb07e4d8adf93607318d2d3ad5d0695f64135e36cc62cc9b2cffe6e: kube-system/coredns-66bc5c9577-jnqnt/coredns" id=5d3a3b3b-1c23-428f-ad8a-6f59d4303bf8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:31:21 embed-certs-312549 crio[835]: time="2025-11-01T09:31:21.329652771Z" level=info msg="Starting container: 4f5a83143fb07e4d8adf93607318d2d3ad5d0695f64135e36cc62cc9b2cffe6e" id=2d3865b2-2775-4771-a5f2-a55de94ea40f name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:31:21 embed-certs-312549 crio[835]: time="2025-11-01T09:31:21.334667746Z" level=info msg="Started container" PID=1751 containerID=4f5a83143fb07e4d8adf93607318d2d3ad5d0695f64135e36cc62cc9b2cffe6e description=kube-system/coredns-66bc5c9577-jnqnt/coredns id=2d3865b2-2775-4771-a5f2-a55de94ea40f name=/runtime.v1.RuntimeService/StartContainer sandboxID=1673f8af0f0871339a592b0e91e91c35a594411f5115787f23e0c20bf5374b5c
	Nov 01 09:31:23 embed-certs-312549 crio[835]: time="2025-11-01T09:31:23.937062631Z" level=info msg="Running pod sandbox: default/busybox/POD" id=49ebce5d-41ea-4dbc-bc8c-aeae15bf9463 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:31:23 embed-certs-312549 crio[835]: time="2025-11-01T09:31:23.937133308Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:31:23 embed-certs-312549 crio[835]: time="2025-11-01T09:31:23.948913518Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:53933533d4cf15350cdb0945a3202b6d22d7bcd64da9a72ecd753f6628cb9852 UID:66d83383-a6d0-4f40-997c-921df4348491 NetNS:/var/run/netns/c8216e9a-d244-4b77-8eb0-415f5345a056 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012c638}] Aliases:map[]}"
	Nov 01 09:31:23 embed-certs-312549 crio[835]: time="2025-11-01T09:31:23.949087356Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 01 09:31:23 embed-certs-312549 crio[835]: time="2025-11-01T09:31:23.957548422Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:53933533d4cf15350cdb0945a3202b6d22d7bcd64da9a72ecd753f6628cb9852 UID:66d83383-a6d0-4f40-997c-921df4348491 NetNS:/var/run/netns/c8216e9a-d244-4b77-8eb0-415f5345a056 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012c638}] Aliases:map[]}"
	Nov 01 09:31:23 embed-certs-312549 crio[835]: time="2025-11-01T09:31:23.957873869Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 01 09:31:23 embed-certs-312549 crio[835]: time="2025-11-01T09:31:23.960819297Z" level=info msg="Ran pod sandbox 53933533d4cf15350cdb0945a3202b6d22d7bcd64da9a72ecd753f6628cb9852 with infra container: default/busybox/POD" id=49ebce5d-41ea-4dbc-bc8c-aeae15bf9463 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:31:23 embed-certs-312549 crio[835]: time="2025-11-01T09:31:23.963674742Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0413e84d-b080-44c1-a30d-81c9d43dd451 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:31:23 embed-certs-312549 crio[835]: time="2025-11-01T09:31:23.964200996Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=0413e84d-b080-44c1-a30d-81c9d43dd451 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:31:23 embed-certs-312549 crio[835]: time="2025-11-01T09:31:23.964350169Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=0413e84d-b080-44c1-a30d-81c9d43dd451 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:31:23 embed-certs-312549 crio[835]: time="2025-11-01T09:31:23.968364778Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=94f8a3dd-2cf9-43a4-807e-5e8e81aae9b7 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:31:23 embed-certs-312549 crio[835]: time="2025-11-01T09:31:23.970785847Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 01 09:31:25 embed-certs-312549 crio[835]: time="2025-11-01T09:31:25.974102252Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=94f8a3dd-2cf9-43a4-807e-5e8e81aae9b7 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:31:25 embed-certs-312549 crio[835]: time="2025-11-01T09:31:25.978278809Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e5ef4551-a52c-4bb4-a305-23349db95386 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:31:25 embed-certs-312549 crio[835]: time="2025-11-01T09:31:25.98059639Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f5445252-73e2-4fbd-8f0f-923d9aa84b90 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:31:25 embed-certs-312549 crio[835]: time="2025-11-01T09:31:25.986247179Z" level=info msg="Creating container: default/busybox/busybox" id=8f165e2b-b1ec-4b3a-bd53-ecb4026005fa name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:31:25 embed-certs-312549 crio[835]: time="2025-11-01T09:31:25.986370999Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:31:25 embed-certs-312549 crio[835]: time="2025-11-01T09:31:25.991094404Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:31:25 embed-certs-312549 crio[835]: time="2025-11-01T09:31:25.991725427Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:31:26 embed-certs-312549 crio[835]: time="2025-11-01T09:31:26.009938362Z" level=info msg="Created container dc8c5d2db348ed35364287d7d4ca9d54b1ac621f7625260e18d304610ef69e55: default/busybox/busybox" id=8f165e2b-b1ec-4b3a-bd53-ecb4026005fa name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:31:26 embed-certs-312549 crio[835]: time="2025-11-01T09:31:26.012463248Z" level=info msg="Starting container: dc8c5d2db348ed35364287d7d4ca9d54b1ac621f7625260e18d304610ef69e55" id=456073fc-64d8-4552-a53a-3f30a337de38 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:31:26 embed-certs-312549 crio[835]: time="2025-11-01T09:31:26.016386027Z" level=info msg="Started container" PID=1806 containerID=dc8c5d2db348ed35364287d7d4ca9d54b1ac621f7625260e18d304610ef69e55 description=default/busybox/busybox id=456073fc-64d8-4552-a53a-3f30a337de38 name=/runtime.v1.RuntimeService/StartContainer sandboxID=53933533d4cf15350cdb0945a3202b6d22d7bcd64da9a72ecd753f6628cb9852
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	dc8c5d2db348e       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   53933533d4cf1       busybox                                      default
	4f5a83143fb07       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago       Running             coredns                   0                   1673f8af0f087       coredns-66bc5c9577-jnqnt                     kube-system
	01a8476e1ae19       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago       Running             storage-provisioner       0                   9a76cee6e8196       storage-provisioner                          kube-system
	801582d430c4c       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      53 seconds ago       Running             kube-proxy                0                   349fa11f63703       kube-proxy-8d2xs                             kube-system
	1287c14a045a9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      53 seconds ago       Running             kindnet-cni               0                   5bf2c94420250       kindnet-xzrpm                                kube-system
	75978ef31cc6c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   68ae10450d753       kube-scheduler-embed-certs-312549            kube-system
	8c39bded90edf       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   6dbc4210e6a68       etcd-embed-certs-312549                      kube-system
	f6175a988334a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   643e30a72058b       kube-apiserver-embed-certs-312549            kube-system
	e81e5ca5af851       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   53a2044d28310       kube-controller-manager-embed-certs-312549   kube-system
	
	
	==> coredns [4f5a83143fb07e4d8adf93607318d2d3ad5d0695f64135e36cc62cc9b2cffe6e] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35248 - 2672 "HINFO IN 363772795813304967.2791583142604515349. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.022935545s
	
	
	==> describe nodes <==
	Name:               embed-certs-312549
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-312549
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=embed-certs-312549
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_30_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:30:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-312549
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:31:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:31:20 +0000   Sat, 01 Nov 2025 09:30:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:31:20 +0000   Sat, 01 Nov 2025 09:30:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:31:20 +0000   Sat, 01 Nov 2025 09:30:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:31:20 +0000   Sat, 01 Nov 2025 09:31:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-312549
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                9d18f598-7720-463f-91f2-ddc5b6ab87e3
	  Boot ID:                    eebecd53-57fd-46e5-aa39-103fca906436
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-jnqnt                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-embed-certs-312549                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-xzrpm                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-embed-certs-312549             250m (12%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-controller-manager-embed-certs-312549    200m (10%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-8d2xs                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-embed-certs-312549             100m (5%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 53s   kube-proxy       
	  Normal   Starting                 61s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s   kubelet          Node embed-certs-312549 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s   kubelet          Node embed-certs-312549 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s   kubelet          Node embed-certs-312549 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s   node-controller  Node embed-certs-312549 event: Registered Node embed-certs-312549 in Controller
	  Normal   NodeReady                14s   kubelet          Node embed-certs-312549 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov 1 09:10] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:11] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:12] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:13] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:14] overlayfs: idmapped layers are currently not supported
	[  +7.992192] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:15] overlayfs: idmapped layers are currently not supported
	[ +24.457663] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:16] overlayfs: idmapped layers are currently not supported
	[ +26.408819] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:18] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:22] overlayfs: idmapped layers are currently not supported
	[ +31.970573] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:24] overlayfs: idmapped layers are currently not supported
	[ +34.721891] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:25] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:26] overlayfs: idmapped layers are currently not supported
	[  +0.217637] overlayfs: idmapped layers are currently not supported
	[ +42.063471] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:29] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:30] overlayfs: idmapped layers are currently not supported
	[ +22.794250] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8c39bded90edf462782d721a963c51ec4d51e42ab238e3465d664dfb4a74e0b5] <==
	{"level":"warn","ts":"2025-11-01T09:30:29.292929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:29.347796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:29.387985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:29.409625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:29.434480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:29.461329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:29.482187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:29.508170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:29.529703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:29.572115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:29.597941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:29.625278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:29.663968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:29.687755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:29.704164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:29.735973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:29.767017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:29.806569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:29.832371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:29.849314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:29.877682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:29.911312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:29.932283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:29.961851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:30.140761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44308","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:31:34 up 18:14,  0 user,  load average: 2.38, 3.34, 2.95
	Linux embed-certs-312549 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1287c14a045a9fa4f4f305b89352e09aaa9f4f7471bd573ab61f52e6561d93a8] <==
	I1101 09:30:40.358193       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:30:40.358704       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 09:30:40.358843       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:30:40.358861       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:30:40.358876       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:30:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:30:40.649100       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:30:40.649117       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:30:40.649126       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:30:40.649805       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 09:31:10.649862       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 09:31:10.650035       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 09:31:10.650239       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 09:31:10.650292       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1101 09:31:12.249729       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:31:12.249831       1 metrics.go:72] Registering metrics
	I1101 09:31:12.249939       1 controller.go:711] "Syncing nftables rules"
	I1101 09:31:20.655958       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 09:31:20.656086       1 main.go:301] handling current node
	I1101 09:31:30.651960       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 09:31:30.652017       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f6175a988334a0a11e9ac23dda5c95e608438de16bf8bfd5e0523c861f32e1f4] <==
	I1101 09:30:31.076592       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 09:30:31.078035       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 09:30:31.081829       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:30:31.081893       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 09:30:31.083027       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:30:31.124438       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:30:31.124673       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:30:31.751702       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 09:30:31.756647       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 09:30:31.756667       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:30:32.429743       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:30:32.479782       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:30:32.571354       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 09:30:32.579697       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1101 09:30:32.580945       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:30:32.586084       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:30:32.974708       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:30:33.496046       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:30:33.513639       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 09:30:33.528387       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 09:30:37.978089       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:30:38.677935       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:30:38.683548       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:30:39.192663       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1101 09:31:32.764820       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:46980: use of closed network connection
	
	
	==> kube-controller-manager [e81e5ca5af851e3872ba233e8d8d6147c6ba06fa90af73761b1c4047406cf789] <==
	I1101 09:30:37.988945       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 09:30:37.998098       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 09:30:38.009637       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:30:38.012867       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:30:38.018575       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 09:30:38.018976       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 09:30:38.019052       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 09:30:38.019118       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-312549"
	I1101 09:30:38.019170       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 09:30:38.020887       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 09:30:38.020992       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 09:30:38.021008       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 09:30:38.022121       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 09:30:38.023910       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 09:30:38.024059       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 09:30:38.025889       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 09:30:38.027651       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:30:38.028726       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 09:30:38.028781       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 09:30:38.028803       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 09:30:38.028812       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 09:30:38.028818       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 09:30:38.053479       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-312549" podCIDRs=["10.244.0.0/24"]
	I1101 09:30:39.130870       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	I1101 09:31:23.026805       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [801582d430c4c607fa8363a5a1797f57622d3d54bf52983b861de45bde074cf4] <==
	I1101 09:30:40.398976       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:30:40.489836       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:30:40.504048       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:30:40.504153       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 09:30:40.504247       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:30:40.535907       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:30:40.535960       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:30:40.539570       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:30:40.539960       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:30:40.539980       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:30:40.541374       1 config.go:200] "Starting service config controller"
	I1101 09:30:40.541392       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:30:40.541408       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:30:40.541412       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:30:40.541422       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:30:40.541426       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:30:40.542102       1 config.go:309] "Starting node config controller"
	I1101 09:30:40.542118       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:30:40.542125       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:30:40.642584       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:30:40.642620       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:30:40.642672       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [75978ef31cc6c2f63079ae2f454ea08c38ac928b5128249f9b3f013dccc87cd8] <==
	E1101 09:30:31.040572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:30:31.040639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:30:31.040695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:30:31.040756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:30:31.040811       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:30:31.040873       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:30:31.040924       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:30:31.040975       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:30:31.041029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:30:31.042452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:30:31.042530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:30:31.042631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1101 09:30:31.855479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:30:31.882880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:30:31.949516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:30:32.041776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:30:32.066232       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1101 09:30:32.101278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:30:32.108098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:30:32.134909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:30:32.137077       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:30:32.147842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:30:32.157133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:30:32.169592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1101 09:30:34.309162       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:30:38 embed-certs-312549 kubelet[1313]: I1101 09:30:38.113622    1313 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 09:30:39 embed-certs-312549 kubelet[1313]: E1101 09:30:39.302902    1313 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-312549\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-312549' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 01 09:30:39 embed-certs-312549 kubelet[1313]: I1101 09:30:39.392746    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d7bfac1f-401f-4f8d-8584-a5240e63915f-xtables-lock\") pod \"kube-proxy-8d2xs\" (UID: \"d7bfac1f-401f-4f8d-8584-a5240e63915f\") " pod="kube-system/kube-proxy-8d2xs"
	Nov 01 09:30:39 embed-certs-312549 kubelet[1313]: I1101 09:30:39.392820    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9336823d-a6b8-44ac-ba96-9242d7ea9873-cni-cfg\") pod \"kindnet-xzrpm\" (UID: \"9336823d-a6b8-44ac-ba96-9242d7ea9873\") " pod="kube-system/kindnet-xzrpm"
	Nov 01 09:30:39 embed-certs-312549 kubelet[1313]: I1101 09:30:39.392839    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9336823d-a6b8-44ac-ba96-9242d7ea9873-xtables-lock\") pod \"kindnet-xzrpm\" (UID: \"9336823d-a6b8-44ac-ba96-9242d7ea9873\") " pod="kube-system/kindnet-xzrpm"
	Nov 01 09:30:39 embed-certs-312549 kubelet[1313]: I1101 09:30:39.392862    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqgnx\" (UniqueName: \"kubernetes.io/projected/9336823d-a6b8-44ac-ba96-9242d7ea9873-kube-api-access-zqgnx\") pod \"kindnet-xzrpm\" (UID: \"9336823d-a6b8-44ac-ba96-9242d7ea9873\") " pod="kube-system/kindnet-xzrpm"
	Nov 01 09:30:39 embed-certs-312549 kubelet[1313]: I1101 09:30:39.392885    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9336823d-a6b8-44ac-ba96-9242d7ea9873-lib-modules\") pod \"kindnet-xzrpm\" (UID: \"9336823d-a6b8-44ac-ba96-9242d7ea9873\") " pod="kube-system/kindnet-xzrpm"
	Nov 01 09:30:39 embed-certs-312549 kubelet[1313]: I1101 09:30:39.392914    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vm26\" (UniqueName: \"kubernetes.io/projected/d7bfac1f-401f-4f8d-8584-a5240e63915f-kube-api-access-9vm26\") pod \"kube-proxy-8d2xs\" (UID: \"d7bfac1f-401f-4f8d-8584-a5240e63915f\") " pod="kube-system/kube-proxy-8d2xs"
	Nov 01 09:30:39 embed-certs-312549 kubelet[1313]: I1101 09:30:39.392962    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d7bfac1f-401f-4f8d-8584-a5240e63915f-kube-proxy\") pod \"kube-proxy-8d2xs\" (UID: \"d7bfac1f-401f-4f8d-8584-a5240e63915f\") " pod="kube-system/kube-proxy-8d2xs"
	Nov 01 09:30:39 embed-certs-312549 kubelet[1313]: I1101 09:30:39.392994    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d7bfac1f-401f-4f8d-8584-a5240e63915f-lib-modules\") pod \"kube-proxy-8d2xs\" (UID: \"d7bfac1f-401f-4f8d-8584-a5240e63915f\") " pod="kube-system/kube-proxy-8d2xs"
	Nov 01 09:30:40 embed-certs-312549 kubelet[1313]: I1101 09:30:40.141861    1313 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 01 09:30:40 embed-certs-312549 kubelet[1313]: W1101 09:30:40.219730    1313 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/46c884efd26a2388e1c8d6b8b4b264552137880202618095e6b019b947feb1a6/crio-5bf2c944202504336fb5a77b38d6dccb8b8dbaa00c6e464b3e08ff0185822ce5 WatchSource:0}: Error finding container 5bf2c944202504336fb5a77b38d6dccb8b8dbaa00c6e464b3e08ff0185822ce5: Status 404 returned error can't find the container with id 5bf2c944202504336fb5a77b38d6dccb8b8dbaa00c6e464b3e08ff0185822ce5
	Nov 01 09:30:40 embed-certs-312549 kubelet[1313]: W1101 09:30:40.231209    1313 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/46c884efd26a2388e1c8d6b8b4b264552137880202618095e6b019b947feb1a6/crio-349fa11f637036dee5fe15911d3df276a30e3dc2a64df2fc52fa8d8519ade44e WatchSource:0}: Error finding container 349fa11f637036dee5fe15911d3df276a30e3dc2a64df2fc52fa8d8519ade44e: Status 404 returned error can't find the container with id 349fa11f637036dee5fe15911d3df276a30e3dc2a64df2fc52fa8d8519ade44e
	Nov 01 09:30:40 embed-certs-312549 kubelet[1313]: I1101 09:30:40.646036    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-xzrpm" podStartSLOduration=1.646016875 podStartE2EDuration="1.646016875s" podCreationTimestamp="2025-11-01 09:30:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:30:40.645070522 +0000 UTC m=+7.320073017" watchObservedRunningTime="2025-11-01 09:30:40.646016875 +0000 UTC m=+7.321019379"
	Nov 01 09:30:40 embed-certs-312549 kubelet[1313]: I1101 09:30:40.646182    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8d2xs" podStartSLOduration=1.646176247 podStartE2EDuration="1.646176247s" podCreationTimestamp="2025-11-01 09:30:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:30:40.61137548 +0000 UTC m=+7.286377984" watchObservedRunningTime="2025-11-01 09:30:40.646176247 +0000 UTC m=+7.321178759"
	Nov 01 09:31:20 embed-certs-312549 kubelet[1313]: I1101 09:31:20.889952    1313 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 01 09:31:20 embed-certs-312549 kubelet[1313]: E1101 09:31:20.929779    1313 status_manager.go:1018] "Failed to get status for pod" err="pods \"storage-provisioner\" is forbidden: User \"system:node:embed-certs-312549\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-312549' and this object" podUID="74ce420a-03e3-4f7c-b544-860b65f44d69" pod="kube-system/storage-provisioner"
	Nov 01 09:31:21 embed-certs-312549 kubelet[1313]: I1101 09:31:21.028634    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mwb6\" (UniqueName: \"kubernetes.io/projected/74ce420a-03e3-4f7c-b544-860b65f44d69-kube-api-access-7mwb6\") pod \"storage-provisioner\" (UID: \"74ce420a-03e3-4f7c-b544-860b65f44d69\") " pod="kube-system/storage-provisioner"
	Nov 01 09:31:21 embed-certs-312549 kubelet[1313]: I1101 09:31:21.028689    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgpgd\" (UniqueName: \"kubernetes.io/projected/9c241743-79ee-45ae-a369-2b4407cec026-kube-api-access-jgpgd\") pod \"coredns-66bc5c9577-jnqnt\" (UID: \"9c241743-79ee-45ae-a369-2b4407cec026\") " pod="kube-system/coredns-66bc5c9577-jnqnt"
	Nov 01 09:31:21 embed-certs-312549 kubelet[1313]: I1101 09:31:21.028715    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/74ce420a-03e3-4f7c-b544-860b65f44d69-tmp\") pod \"storage-provisioner\" (UID: \"74ce420a-03e3-4f7c-b544-860b65f44d69\") " pod="kube-system/storage-provisioner"
	Nov 01 09:31:21 embed-certs-312549 kubelet[1313]: I1101 09:31:21.028739    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c241743-79ee-45ae-a369-2b4407cec026-config-volume\") pod \"coredns-66bc5c9577-jnqnt\" (UID: \"9c241743-79ee-45ae-a369-2b4407cec026\") " pod="kube-system/coredns-66bc5c9577-jnqnt"
	Nov 01 09:31:21 embed-certs-312549 kubelet[1313]: W1101 09:31:21.274124    1313 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/46c884efd26a2388e1c8d6b8b4b264552137880202618095e6b019b947feb1a6/crio-1673f8af0f0871339a592b0e91e91c35a594411f5115787f23e0c20bf5374b5c WatchSource:0}: Error finding container 1673f8af0f0871339a592b0e91e91c35a594411f5115787f23e0c20bf5374b5c: Status 404 returned error can't find the container with id 1673f8af0f0871339a592b0e91e91c35a594411f5115787f23e0c20bf5374b5c
	Nov 01 09:31:21 embed-certs-312549 kubelet[1313]: I1101 09:31:21.692790    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.692771417 podStartE2EDuration="41.692771417s" podCreationTimestamp="2025-11-01 09:30:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:31:21.679483819 +0000 UTC m=+48.354486331" watchObservedRunningTime="2025-11-01 09:31:21.692771417 +0000 UTC m=+48.367773921"
	Nov 01 09:31:23 embed-certs-312549 kubelet[1313]: I1101 09:31:23.621758    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-jnqnt" podStartSLOduration=45.621715821 podStartE2EDuration="45.621715821s" podCreationTimestamp="2025-11-01 09:30:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:31:21.695580135 +0000 UTC m=+48.370582639" watchObservedRunningTime="2025-11-01 09:31:23.621715821 +0000 UTC m=+50.296718341"
	Nov 01 09:31:23 embed-certs-312549 kubelet[1313]: I1101 09:31:23.746479    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwtzl\" (UniqueName: \"kubernetes.io/projected/66d83383-a6d0-4f40-997c-921df4348491-kube-api-access-hwtzl\") pod \"busybox\" (UID: \"66d83383-a6d0-4f40-997c-921df4348491\") " pod="default/busybox"
	
	
	==> storage-provisioner [01a8476e1ae1926a2e71d7bac6ac902877d65b3acb3460464397ed7c5566e208] <==
	I1101 09:31:21.323669       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 09:31:21.355276       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 09:31:21.360090       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 09:31:21.382450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:21.390675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:31:21.390933       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 09:31:21.392255       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f44664d7-ce86-4249-89be-cbecba2dd10b", APIVersion:"v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-312549_4599a4a8-ce40-4596-906d-9c82b85ce2a3 became leader
	I1101 09:31:21.392583       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-312549_4599a4a8-ce40-4596-906d-9c82b85ce2a3!
	W1101 09:31:21.399658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:21.408520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:31:21.493141       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-312549_4599a4a8-ce40-4596-906d-9c82b85ce2a3!
	W1101 09:31:23.413568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:23.422208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:25.425040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:25.431517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:27.434530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:27.439054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:29.442661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:29.448980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:31.452602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:31.457162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:33.462547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:33.470404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-312549 -n embed-certs-312549
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-312549 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.89s)
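Side note on the repeated "v1 Endpoints is deprecated" warnings in the storage-provisioner log above: they come from its Endpoints-based leader election on kube-system/k8s.io-minikube-hostpath, the lock it acquires at 09:31:21. A minimal inspection sketch for a live cluster follows; the object name is taken from the log above, but the annotation key is an assumption (it is what older client-go Endpoints resource locks write and may differ by version):

	# Show the Endpoints object used as the leader-election lock (name taken from the log above).
	kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# Print just the leader record; the annotation key is an assumption, not taken from this report.
	kubectl -n kube-system get endpoints k8s.io-minikube-hostpath \
	  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'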

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (6.64s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-357229 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-357229 --alsologtostderr -v=1: exit status 80 (2.019406818s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-357229 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:31:48.094551 2506076 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:31:48.094810 2506076 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:48.094843 2506076 out.go:374] Setting ErrFile to fd 2...
	I1101 09:31:48.094879 2506076 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:48.095246 2506076 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 09:31:48.096290 2506076 out.go:368] Setting JSON to false
	I1101 09:31:48.096364 2506076 mustload.go:66] Loading cluster: no-preload-357229
	I1101 09:31:48.097016 2506076 config.go:182] Loaded profile config "no-preload-357229": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:48.097865 2506076 cli_runner.go:164] Run: docker container inspect no-preload-357229 --format={{.State.Status}}
	I1101 09:31:48.126622 2506076 host.go:66] Checking if "no-preload-357229" exists ...
	I1101 09:31:48.126933 2506076 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:31:48.216153 2506076 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 09:31:48.205943559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:31:48.216801 2506076 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-357229 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 09:31:48.220043 2506076 out.go:179] * Pausing node no-preload-357229 ... 
	I1101 09:31:48.223713 2506076 host.go:66] Checking if "no-preload-357229" exists ...
	I1101 09:31:48.224172 2506076 ssh_runner.go:195] Run: systemctl --version
	I1101 09:31:48.224222 2506076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-357229
	I1101 09:31:48.258743 2506076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36355 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/no-preload-357229/id_rsa Username:docker}
	I1101 09:31:48.371505 2506076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:31:48.387268 2506076 pause.go:52] kubelet running: true
	I1101 09:31:48.387360 2506076 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:31:48.704342 2506076 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:31:48.704421 2506076 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:31:48.834408 2506076 cri.go:89] found id: "9de81cbcd3c30c458328c720dd937efe1790c7a91522e7bad9fcd94e49c9d97d"
	I1101 09:31:48.834431 2506076 cri.go:89] found id: "1438c362f8e0c3aa8b9a453bd141bf368984df37861d76c34aa0000a95c7a3b1"
	I1101 09:31:48.834436 2506076 cri.go:89] found id: "59ba904954f2515bbb33a39a736732a91955b5d518b16f073abeccaa6d6aa926"
	I1101 09:31:48.834440 2506076 cri.go:89] found id: "771f2b304e60c1b1d5959ab96ba0831e10eea22c8ac40a0169fea3da6d8acaba"
	I1101 09:31:48.834444 2506076 cri.go:89] found id: "976f4eb8d6c55dd3e124804e23e125f941974fcaab7c4d5e5d0326c26e5c577e"
	I1101 09:31:48.834447 2506076 cri.go:89] found id: "66bc439562b33aee6bf209a9e922684969e9f8205826ecb76d4a5f42eff5e976"
	I1101 09:31:48.834450 2506076 cri.go:89] found id: "68f7fcc91b8befc150b5fb790881da1ad70f3bfe9fa8eb19146693bc1a766b36"
	I1101 09:31:48.834453 2506076 cri.go:89] found id: "ae5869cf712bf7909c67aaf8a14f0be1a3ace2ea33f2b9abc08d3e78149e156f"
	I1101 09:31:48.834457 2506076 cri.go:89] found id: "e3948966e2df765c7a12e39bf7465a601cc905044915c5c42848f542f11cee90"
	I1101 09:31:48.834464 2506076 cri.go:89] found id: "7831ce5b5d1470589fdbbeb9fdb8a7764d12c7cb6003155e5fb0e944dc762564"
	I1101 09:31:48.834467 2506076 cri.go:89] found id: "c30acba8410231f1908e91c30349b403dc6788563c3ff2988167f6eb869003eb"
	I1101 09:31:48.834470 2506076 cri.go:89] found id: ""
	I1101 09:31:48.834518 2506076 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:31:48.846727 2506076 retry.go:31] will retry after 285.583381ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:48Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:31:49.133083 2506076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:31:49.154181 2506076 pause.go:52] kubelet running: false
	I1101 09:31:49.154247 2506076 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:31:49.336963 2506076 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:31:49.337040 2506076 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:31:49.400422 2506076 cri.go:89] found id: "9de81cbcd3c30c458328c720dd937efe1790c7a91522e7bad9fcd94e49c9d97d"
	I1101 09:31:49.400441 2506076 cri.go:89] found id: "1438c362f8e0c3aa8b9a453bd141bf368984df37861d76c34aa0000a95c7a3b1"
	I1101 09:31:49.400446 2506076 cri.go:89] found id: "59ba904954f2515bbb33a39a736732a91955b5d518b16f073abeccaa6d6aa926"
	I1101 09:31:49.400450 2506076 cri.go:89] found id: "771f2b304e60c1b1d5959ab96ba0831e10eea22c8ac40a0169fea3da6d8acaba"
	I1101 09:31:49.400453 2506076 cri.go:89] found id: "976f4eb8d6c55dd3e124804e23e125f941974fcaab7c4d5e5d0326c26e5c577e"
	I1101 09:31:49.400457 2506076 cri.go:89] found id: "66bc439562b33aee6bf209a9e922684969e9f8205826ecb76d4a5f42eff5e976"
	I1101 09:31:49.400460 2506076 cri.go:89] found id: "68f7fcc91b8befc150b5fb790881da1ad70f3bfe9fa8eb19146693bc1a766b36"
	I1101 09:31:49.400464 2506076 cri.go:89] found id: "ae5869cf712bf7909c67aaf8a14f0be1a3ace2ea33f2b9abc08d3e78149e156f"
	I1101 09:31:49.400467 2506076 cri.go:89] found id: "e3948966e2df765c7a12e39bf7465a601cc905044915c5c42848f542f11cee90"
	I1101 09:31:49.400493 2506076 cri.go:89] found id: "7831ce5b5d1470589fdbbeb9fdb8a7764d12c7cb6003155e5fb0e944dc762564"
	I1101 09:31:49.400498 2506076 cri.go:89] found id: "c30acba8410231f1908e91c30349b403dc6788563c3ff2988167f6eb869003eb"
	I1101 09:31:49.400501 2506076 cri.go:89] found id: ""
	I1101 09:31:49.400552 2506076 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:31:49.410997 2506076 retry.go:31] will retry after 314.2245ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:49Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:31:49.725522 2506076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:31:49.737857 2506076 pause.go:52] kubelet running: false
	I1101 09:31:49.737972 2506076 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:31:49.902961 2506076 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:31:49.903041 2506076 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:31:49.967024 2506076 cri.go:89] found id: "9de81cbcd3c30c458328c720dd937efe1790c7a91522e7bad9fcd94e49c9d97d"
	I1101 09:31:49.967048 2506076 cri.go:89] found id: "1438c362f8e0c3aa8b9a453bd141bf368984df37861d76c34aa0000a95c7a3b1"
	I1101 09:31:49.967053 2506076 cri.go:89] found id: "59ba904954f2515bbb33a39a736732a91955b5d518b16f073abeccaa6d6aa926"
	I1101 09:31:49.967057 2506076 cri.go:89] found id: "771f2b304e60c1b1d5959ab96ba0831e10eea22c8ac40a0169fea3da6d8acaba"
	I1101 09:31:49.967060 2506076 cri.go:89] found id: "976f4eb8d6c55dd3e124804e23e125f941974fcaab7c4d5e5d0326c26e5c577e"
	I1101 09:31:49.967064 2506076 cri.go:89] found id: "66bc439562b33aee6bf209a9e922684969e9f8205826ecb76d4a5f42eff5e976"
	I1101 09:31:49.967067 2506076 cri.go:89] found id: "68f7fcc91b8befc150b5fb790881da1ad70f3bfe9fa8eb19146693bc1a766b36"
	I1101 09:31:49.967070 2506076 cri.go:89] found id: "ae5869cf712bf7909c67aaf8a14f0be1a3ace2ea33f2b9abc08d3e78149e156f"
	I1101 09:31:49.967073 2506076 cri.go:89] found id: "e3948966e2df765c7a12e39bf7465a601cc905044915c5c42848f542f11cee90"
	I1101 09:31:49.967097 2506076 cri.go:89] found id: "7831ce5b5d1470589fdbbeb9fdb8a7764d12c7cb6003155e5fb0e944dc762564"
	I1101 09:31:49.967106 2506076 cri.go:89] found id: "c30acba8410231f1908e91c30349b403dc6788563c3ff2988167f6eb869003eb"
	I1101 09:31:49.967126 2506076 cri.go:89] found id: ""
	I1101 09:31:49.967188 2506076 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:31:49.981500 2506076 out.go:203] 
	W1101 09:31:49.984334 2506076 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:31:49.984353 2506076 out.go:285] * 
	* 
	W1101 09:31:49.996830 2506076 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:31:49.999888 2506076 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-357229 --alsologtostderr -v=1 failed: exit status 80
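For orientation, the GUEST_PAUSE failure above reduces to the runc error repeated in stderr: the pause path lists kube-system containers through the CRI, then runs "sudo runc list -f json", which reads the default state root /run/runc, and that directory does not exist on this crio node. A minimal diagnostic sketch, assuming the profile is still running; the alternate state directories below are assumptions, not taken from this report:

	# Run inside the node, e.g. via: out/minikube-linux-arm64 ssh -p no-preload-357229
	sudo crictl ps --quiet                                  # containers are visible through the CRI socket
	sudo runc list -f json                                  # reproduces the failure: default root is /run/runc
	ls -d /run/runc /run/crun /run/runc-crio 2>/dev/null    # guess where runtime state actually lives (paths are assumptions)
	sudo runc --root /run/<state-dir> list -f json          # retry against whichever directory was found above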
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-357229
helpers_test.go:243: (dbg) docker inspect no-preload-357229:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6863b4e551e28c0cdf28394d17eb7fbc923c22a70b5222d3093502d55d1412b9",
	        "Created": "2025-11-01T09:29:04.610428393Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2503053,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:30:41.198686789Z",
	            "FinishedAt": "2025-11-01T09:30:40.395443423Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/6863b4e551e28c0cdf28394d17eb7fbc923c22a70b5222d3093502d55d1412b9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6863b4e551e28c0cdf28394d17eb7fbc923c22a70b5222d3093502d55d1412b9/hostname",
	        "HostsPath": "/var/lib/docker/containers/6863b4e551e28c0cdf28394d17eb7fbc923c22a70b5222d3093502d55d1412b9/hosts",
	        "LogPath": "/var/lib/docker/containers/6863b4e551e28c0cdf28394d17eb7fbc923c22a70b5222d3093502d55d1412b9/6863b4e551e28c0cdf28394d17eb7fbc923c22a70b5222d3093502d55d1412b9-json.log",
	        "Name": "/no-preload-357229",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-357229:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-357229",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6863b4e551e28c0cdf28394d17eb7fbc923c22a70b5222d3093502d55d1412b9",
	                "LowerDir": "/var/lib/docker/overlay2/5cc099e0674d206e58658b98596baea6d36e69290a8f09a34c31e1233de8e33b-init/diff:/var/lib/docker/overlay2/e248e2c4c8c52e2b41c7098e27a1e6d3433c7b0d01c47093073da500268c4b77/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5cc099e0674d206e58658b98596baea6d36e69290a8f09a34c31e1233de8e33b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5cc099e0674d206e58658b98596baea6d36e69290a8f09a34c31e1233de8e33b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5cc099e0674d206e58658b98596baea6d36e69290a8f09a34c31e1233de8e33b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-357229",
	                "Source": "/var/lib/docker/volumes/no-preload-357229/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-357229",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-357229",
	                "name.minikube.sigs.k8s.io": "no-preload-357229",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "94f2581520ff7f6866b905d40a18b223d3837b592ad77efc8099d4e0e784f349",
	            "SandboxKey": "/var/run/docker/netns/94f2581520ff",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36355"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36356"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36359"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36357"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36358"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-357229": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:80:a2:49:db:60",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9c399d9cfbf1bf49ecabecfc0553884dd8ceaaa3ff2f3c1310f3dc120db9b811",
	                    "EndpointID": "28bcb971d271569a4bed9762543a9fb909e46efa3c0dbeb71b8c2b2f297dfc2b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-357229",
	                        "6863b4e551e2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
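The docker inspect dump above is also where the harness gets its SSH endpoint: the pause run at 09:31:48 pulled the host port mapped to 22/tcp with a Go template, and the same one-liner can be replayed by hand against this output. A small sketch using only the template already visible in the log (profile name taken from this report):

	# Extract the host port mapped to the node's SSH port, as minikube's cli_runner does.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-357229
	# Expected against the inspect output above: 36355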
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-357229 -n no-preload-357229
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-357229 -n no-preload-357229: exit status 2 (428.826427ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-357229 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-357229 logs -n 25: (1.211744477s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cert-options-578478 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-578478    │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ 01 Nov 25 09:26 UTC │
	│ delete  │ -p cert-options-578478                                                                                                                                                                                                                        │ cert-options-578478    │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ 01 Nov 25 09:26 UTC │
	│ start   │ -p old-k8s-version-068218 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-068218 │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ 01 Nov 25 09:27 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-068218 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-068218 │ jenkins │ v1.37.0 │ 01 Nov 25 09:27 UTC │                     │
	│ stop    │ -p old-k8s-version-068218 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-068218 │ jenkins │ v1.37.0 │ 01 Nov 25 09:27 UTC │ 01 Nov 25 09:27 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-068218 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-068218 │ jenkins │ v1.37.0 │ 01 Nov 25 09:27 UTC │ 01 Nov 25 09:27 UTC │
	│ start   │ -p old-k8s-version-068218 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-068218 │ jenkins │ v1.37.0 │ 01 Nov 25 09:27 UTC │ 01 Nov 25 09:28 UTC │
	│ image   │ old-k8s-version-068218 image list --format=json                                                                                                                                                                                               │ old-k8s-version-068218 │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │ 01 Nov 25 09:28 UTC │
	│ pause   │ -p old-k8s-version-068218 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-068218 │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │                     │
	│ delete  │ -p old-k8s-version-068218                                                                                                                                                                                                                     │ old-k8s-version-068218 │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ delete  │ -p old-k8s-version-068218                                                                                                                                                                                                                     │ old-k8s-version-068218 │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ start   │ -p no-preload-357229 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-357229      │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:30 UTC │
	│ start   │ -p cert-expiration-218273 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-218273 │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ delete  │ -p cert-expiration-218273                                                                                                                                                                                                                     │ cert-expiration-218273 │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ start   │ -p embed-certs-312549 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-312549     │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:31 UTC │
	│ addons  │ enable metrics-server -p no-preload-357229 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-357229      │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │                     │
	│ stop    │ -p no-preload-357229 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-357229      │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
	│ addons  │ enable dashboard -p no-preload-357229 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-357229      │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
	│ start   │ -p no-preload-357229 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-357229      │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:31 UTC │
	│ addons  │ enable metrics-server -p embed-certs-312549 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-312549     │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ stop    │ -p embed-certs-312549 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-312549     │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ image   │ no-preload-357229 image list --format=json                                                                                                                                                                                                    │ no-preload-357229      │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ addons  │ enable dashboard -p embed-certs-312549 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-312549     │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ start   │ -p embed-certs-312549 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-312549     │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ pause   │ -p no-preload-357229 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-357229      │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:31:48
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:31:48.071999 2506068 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:31:48.072212 2506068 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:48.072224 2506068 out.go:374] Setting ErrFile to fd 2...
	I1101 09:31:48.072229 2506068 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:48.072517 2506068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 09:31:48.072976 2506068 out.go:368] Setting JSON to false
	I1101 09:31:48.079543 2506068 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":65654,"bootTime":1761923854,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 09:31:48.079630 2506068 start.go:143] virtualization:  
	I1101 09:31:48.083749 2506068 out.go:179] * [embed-certs-312549] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 09:31:48.087806 2506068 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:31:48.087915 2506068 notify.go:221] Checking for updates...
	I1101 09:31:48.094132 2506068 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:31:48.097208 2506068 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:31:48.100167 2506068 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2314135/.minikube
	I1101 09:31:48.104005 2506068 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 09:31:48.106987 2506068 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:31:48.110464 2506068 config.go:182] Loaded profile config "embed-certs-312549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:48.111044 2506068 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:31:48.148993 2506068 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:31:48.149109 2506068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:31:48.258431 2506068 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 09:31:48.247996481 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:31:48.259303 2506068 docker.go:319] overlay module found
	I1101 09:31:48.264692 2506068 out.go:179] * Using the docker driver based on existing profile
	I1101 09:31:48.267544 2506068 start.go:309] selected driver: docker
	I1101 09:31:48.267557 2506068 start.go:930] validating driver "docker" against &{Name:embed-certs-312549 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-312549 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:31:48.267654 2506068 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:31:48.268438 2506068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:31:48.340362 2506068 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 09:31:48.331586885 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:31:48.340710 2506068 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:31:48.340742 2506068 cni.go:84] Creating CNI manager for ""
	I1101 09:31:48.340806 2506068 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:31:48.340846 2506068 start.go:353] cluster config:
	{Name:embed-certs-312549 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-312549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:31:48.345856 2506068 out.go:179] * Starting "embed-certs-312549" primary control-plane node in "embed-certs-312549" cluster
	I1101 09:31:48.348737 2506068 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:31:48.351628 2506068 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:31:48.354405 2506068 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:31:48.354459 2506068 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 09:31:48.354469 2506068 cache.go:59] Caching tarball of preloaded images
	I1101 09:31:48.354506 2506068 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:31:48.354549 2506068 preload.go:233] Found /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 09:31:48.354560 2506068 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:31:48.354682 2506068 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/embed-certs-312549/config.json ...
	I1101 09:31:48.377785 2506068 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:31:48.377817 2506068 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:31:48.377829 2506068 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:31:48.377860 2506068 start.go:360] acquireMachinesLock for embed-certs-312549: {Name:mkc891654a695438e19d0a82e76ef43fc02ba964 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:31:48.377911 2506068 start.go:364] duration metric: took 35.149µs to acquireMachinesLock for "embed-certs-312549"
	I1101 09:31:48.377930 2506068 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:31:48.377935 2506068 fix.go:54] fixHost starting: 
	I1101 09:31:48.378184 2506068 cli_runner.go:164] Run: docker container inspect embed-certs-312549 --format={{.State.Status}}
	I1101 09:31:48.395206 2506068 fix.go:112] recreateIfNeeded on embed-certs-312549: state=Stopped err=<nil>
	W1101 09:31:48.395239 2506068 fix.go:138] unexpected machine state, will restart: <nil>
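This start log shows minikube reusing the existing profile rather than recreating it: the cluster config is persisted to .minikube/profiles/embed-certs-312549/config.json, and fixHost finds the machine container in state=Stopped, so it only restarts it. Both checks can be repeated by hand (paths assume the same Jenkins workspace as above):

    cat /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/embed-certs-312549/config.json | head
    docker container inspect embed-certs-312549 --format={{.State.Status}}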
	
	
	==> CRI-O <==
	Nov 01 09:31:35 no-preload-357229 crio[652]: time="2025-11-01T09:31:35.363891997Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:31:35 no-preload-357229 crio[652]: time="2025-11-01T09:31:35.36985399Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:31:35 no-preload-357229 crio[652]: time="2025-11-01T09:31:35.370323761Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:31:35 no-preload-357229 crio[652]: time="2025-11-01T09:31:35.370362275Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:31:35 no-preload-357229 crio[652]: time="2025-11-01T09:31:35.381236Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:31:35 no-preload-357229 crio[652]: time="2025-11-01T09:31:35.381279962Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:31:35 no-preload-357229 crio[652]: time="2025-11-01T09:31:35.38129839Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:31:35 no-preload-357229 crio[652]: time="2025-11-01T09:31:35.393220464Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:31:35 no-preload-357229 crio[652]: time="2025-11-01T09:31:35.393373633Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:31:35 no-preload-357229 crio[652]: time="2025-11-01T09:31:35.393464076Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:31:35 no-preload-357229 crio[652]: time="2025-11-01T09:31:35.399522107Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:31:35 no-preload-357229 crio[652]: time="2025-11-01T09:31:35.399695074Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:31:42 no-preload-357229 crio[652]: time="2025-11-01T09:31:42.919840478Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5aa0a164-0595-4a22-9b71-6f164b8e1886 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:31:42 no-preload-357229 crio[652]: time="2025-11-01T09:31:42.920875371Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1b2f8935-05c6-4012-88b6-0409934d65d9 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:31:42 no-preload-357229 crio[652]: time="2025-11-01T09:31:42.921742169Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6h7gx/dashboard-metrics-scraper" id=1f74c048-b36f-42d5-bcd2-730b45e5fb56 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:31:42 no-preload-357229 crio[652]: time="2025-11-01T09:31:42.92184048Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:31:42 no-preload-357229 crio[652]: time="2025-11-01T09:31:42.929023904Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:31:42 no-preload-357229 crio[652]: time="2025-11-01T09:31:42.929691438Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:31:42 no-preload-357229 crio[652]: time="2025-11-01T09:31:42.955951682Z" level=info msg="Created container 7831ce5b5d1470589fdbbeb9fdb8a7764d12c7cb6003155e5fb0e944dc762564: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6h7gx/dashboard-metrics-scraper" id=1f74c048-b36f-42d5-bcd2-730b45e5fb56 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:31:42 no-preload-357229 crio[652]: time="2025-11-01T09:31:42.956920058Z" level=info msg="Starting container: 7831ce5b5d1470589fdbbeb9fdb8a7764d12c7cb6003155e5fb0e944dc762564" id=fe130744-6a63-4f7e-853c-05cc7c79e993 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:31:42 no-preload-357229 crio[652]: time="2025-11-01T09:31:42.958448951Z" level=info msg="Started container" PID=1733 containerID=7831ce5b5d1470589fdbbeb9fdb8a7764d12c7cb6003155e5fb0e944dc762564 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6h7gx/dashboard-metrics-scraper id=fe130744-6a63-4f7e-853c-05cc7c79e993 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f204861758bef3a9d8cba2bac125be4bcb7f55c3b8ad18d2e1ceef8bcd09a80f
	Nov 01 09:31:42 no-preload-357229 conmon[1731]: conmon 7831ce5b5d1470589fdb <ninfo>: container 1733 exited with status 1
	Nov 01 09:31:43 no-preload-357229 crio[652]: time="2025-11-01T09:31:43.14320455Z" level=info msg="Removing container: 2efc3d3e3080ca6226b6aaf5cb2de97ef1419913a2e93f2376cf37ac473a2b5b" id=171650c1-b544-4b82-b71b-f4ba04bdf50e name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:31:43 no-preload-357229 crio[652]: time="2025-11-01T09:31:43.15390251Z" level=info msg="Error loading conmon cgroup of container 2efc3d3e3080ca6226b6aaf5cb2de97ef1419913a2e93f2376cf37ac473a2b5b: cgroup deleted" id=171650c1-b544-4b82-b71b-f4ba04bdf50e name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:31:43 no-preload-357229 crio[652]: time="2025-11-01T09:31:43.157149812Z" level=info msg="Removed container 2efc3d3e3080ca6226b6aaf5cb2de97ef1419913a2e93f2376cf37ac473a2b5b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6h7gx/dashboard-metrics-scraper" id=171650c1-b544-4b82-b71b-f4ba04bdf50e name=/runtime.v1.RuntimeService/RemoveContainer
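The CRI-O excerpt captures one full crash cycle of the dashboard-metrics-scraper container: the image in use is the test's override registry.k8s.io/echoserver:1.4 (the "Checking image status" lines above), the new container starts as PID 1733, conmon reports it exiting with status 1 almost immediately, and the previous attempt is then garbage-collected. To read the failed container's output directly on the node, one option (crictl ships inside the minikube node image) is:

    minikube -p no-preload-357229 ssh -- sudo crictl ps -a --name dashboard-metrics-scraper
    minikube -p no-preload-357229 ssh -- sudo crictl logs 7831ce5b5d147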
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	7831ce5b5d147       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           8 seconds ago        Exited              dashboard-metrics-scraper   3                   f204861758bef       dashboard-metrics-scraper-6ffb444bf9-6h7gx   kubernetes-dashboard
	9de81cbcd3c30       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           24 seconds ago       Running             storage-provisioner         2                   0bfb4bbc09a4e       storage-provisioner                          kube-system
	c30acba841023       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   48 seconds ago       Running             kubernetes-dashboard        0                   f995f1f4e16b7       kubernetes-dashboard-855c9754f9-r6mtl        kubernetes-dashboard
	1438c362f8e0c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           56 seconds ago       Running             coredns                     1                   19081e3fce3e7       coredns-66bc5c9577-txw5s                     kube-system
	8bd8813a30ca1       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           56 seconds ago       Running             busybox                     1                   7ad6e33aa48b3       busybox                                      default
	59ba904954f25       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           56 seconds ago       Exited              storage-provisioner         1                   0bfb4bbc09a4e       storage-provisioner                          kube-system
	771f2b304e60c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           56 seconds ago       Running             kindnet-cni                 1                   dcc8d5b7566a7       kindnet-lxlsh                                kube-system
	976f4eb8d6c55       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           56 seconds ago       Running             kube-proxy                  1                   fb2bf5742da24       kube-proxy-2mqtw                             kube-system
	66bc439562b33       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   97e5aebeadcd4       kube-apiserver-no-preload-357229             kube-system
	68f7fcc91b8be       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   07a3db5ad95af       etcd-no-preload-357229                       kube-system
	ae5869cf712bf       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   ed342b3f7651d       kube-controller-manager-no-preload-357229    kube-system
	e3948966e2df7       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   bcd0c46a12b45       kube-scheduler-no-preload-357229             kube-system
	
	
	==> coredns [1438c362f8e0c3aa8b9a453bd141bf368984df37861d76c34aa0000a95c7a3b1] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47095 - 1105 "HINFO IN 1393672097041498670.3145267040789559047. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.050925308s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
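The list errors against https://10.96.0.1:443 are consistent with requests issued right after the restart, before kindnet and kube-proxy had reprogrammed the dataplane: each one times out 30 s later, and nothing fails after kindnet reports its caches synced at 09:31:26 (see the kindnet section below) and the replacement storage-provisioner connects at the same time. A quick external check of the same API surface, assuming the kubectl context minikube created for this profile, is:

    kubectl --context no-preload-357229 get --raw='/readyz?verbose'
    kubectl --context no-preload-357229 -n kube-system logs -l k8s-app=kube-dns --tail=20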
	
	
	==> describe nodes <==
	Name:               no-preload-357229
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-357229
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=no-preload-357229
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_29_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:29:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-357229
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:31:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:31:24 +0000   Sat, 01 Nov 2025 09:29:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:31:24 +0000   Sat, 01 Nov 2025 09:29:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:31:24 +0000   Sat, 01 Nov 2025 09:29:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:31:24 +0000   Sat, 01 Nov 2025 09:30:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-357229
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                7d552ba1-c6de-4d90-ae3f-74806a4aebb4
	  Boot ID:                    eebecd53-57fd-46e5-aa39-103fca906436
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-txw5s                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     114s
	  kube-system                 etcd-no-preload-357229                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m
	  kube-system                 kindnet-lxlsh                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-no-preload-357229              250m (12%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-no-preload-357229     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-2mqtw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-no-preload-357229              100m (5%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-6h7gx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-r6mtl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 112s                   kube-proxy       
	  Normal   Starting                 55s                    kube-proxy       
	  Normal   Starting                 2m10s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m10s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet          Node no-preload-357229 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node no-preload-357229 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m10s (x8 over 2m10s)  kubelet          Node no-preload-357229 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    119s                   kubelet          Node no-preload-357229 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 119s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  119s                   kubelet          Node no-preload-357229 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     119s                   kubelet          Node no-preload-357229 status is now: NodeHasSufficientPID
	  Normal   Starting                 119s                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           115s                   node-controller  Node no-preload-357229 event: Registered Node no-preload-357229 in Controller
	  Normal   NodeReady                97s                    kubelet          Node no-preload-357229 status is now: NodeReady
	  Normal   Starting                 64s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  63s (x8 over 64s)      kubelet          Node no-preload-357229 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s (x8 over 64s)      kubelet          Node no-preload-357229 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s (x8 over 64s)      kubelet          Node no-preload-357229 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                    node-controller  Node no-preload-357229 event: Registered Node no-preload-357229 in Controller
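For reference, the 850m (42%) CPU request total above is simply the sum of the per-pod requests on this 2-CPU node:

    100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver)
      + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m
    850m / 2000m ≈ 42%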
	
	
	==> dmesg <==
	[Nov 1 09:10] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:11] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:12] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:13] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:14] overlayfs: idmapped layers are currently not supported
	[  +7.992192] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:15] overlayfs: idmapped layers are currently not supported
	[ +24.457663] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:16] overlayfs: idmapped layers are currently not supported
	[ +26.408819] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:18] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:22] overlayfs: idmapped layers are currently not supported
	[ +31.970573] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:24] overlayfs: idmapped layers are currently not supported
	[ +34.721891] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:25] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:26] overlayfs: idmapped layers are currently not supported
	[  +0.217637] overlayfs: idmapped layers are currently not supported
	[ +42.063471] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:29] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:30] overlayfs: idmapped layers are currently not supported
	[ +22.794250] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [68f7fcc91b8befc150b5fb790881da1ad70f3bfe9fa8eb19146693bc1a766b36] <==
	{"level":"warn","ts":"2025-11-01T09:30:51.567473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.585317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.617753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.636951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.653588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.675333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.694194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.708718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.724444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.760186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.769638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.801381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.820771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.827637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.850334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.869614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.885981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.908957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.919140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.930472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.953331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.984469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:52.000561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:52.014524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:52.123354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45410","server-name":"","error":"EOF"}
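The repeated "rejected connection on client endpoint ... EOF" warnings at 09:30:51-52 are usually benign: a client opened a TCP connection to the client port and closed it before completing the TLS handshake, which is what probe-style connections look like while the apiserver is coming back up. If etcd health were actually in doubt, it can be checked from inside the static pod; the certificate paths below assume minikube's default /var/lib/minikube/certs layout:

    kubectl --context no-preload-357229 -n kube-system exec etcd-no-preload-357229 -- etcdctl \
        --endpoints=https://127.0.0.1:2379 \
        --cacert=/var/lib/minikube/certs/etcd/ca.crt \
        --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt \
        --key=/var/lib/minikube/certs/etcd/healthcheck-client.key \
        endpoint health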
	
	
	==> kernel <==
	 09:31:51 up 18:14,  0 user,  load average: 2.06, 3.22, 2.92
	Linux no-preload-357229 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [771f2b304e60c1b1d5959ab96ba0831e10eea22c8ac40a0169fea3da6d8acaba] <==
	I1101 09:30:55.052600       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:30:55.058775       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 09:30:55.059025       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:30:55.059140       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:30:55.059188       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:30:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:30:55.354245       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:30:55.354310       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:30:55.354341       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:30:55.354754       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 09:31:25.355253       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 09:31:25.355382       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 09:31:25.355436       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 09:31:25.355454       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1101 09:31:26.854556       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:31:26.854593       1 metrics.go:72] Registering metrics
	I1101 09:31:26.854647       1 controller.go:711] "Syncing nftables rules"
	I1101 09:31:35.355746       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 09:31:35.355780       1 main.go:301] handling current node
	I1101 09:31:45.354930       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 09:31:45.354974       1 main.go:301] handling current node
	
	
	==> kube-apiserver [66bc439562b33aee6bf209a9e922684969e9f8205826ecb76d4a5f42eff5e976] <==
	I1101 09:30:53.127482       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 09:30:53.127498       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 09:30:53.133472       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:30:53.133530       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 09:30:53.133568       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:30:53.134092       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 09:30:53.158256       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 09:30:53.172011       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 09:30:53.172034       1 policy_source.go:240] refreshing policies
	I1101 09:30:53.195449       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 09:30:53.195809       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 09:30:53.201801       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 09:30:53.209831       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:30:53.222900       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:30:53.712459       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:30:53.782726       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:30:53.834508       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:30:53.873128       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:30:53.893417       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:30:53.910964       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:30:53.988342       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.34.75"}
	I1101 09:30:54.057787       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.222.228"}
	I1101 09:30:56.535211       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:30:56.834699       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:30:56.884406       1 controller.go:667] quota admission added evaluator for: endpoints
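The two allocated clusterIPs (10.104.34.75 and 10.111.222.228) belong to the dashboard Services recreated at 09:30:53-54; they can be listed with:

    kubectl --context no-preload-357229 -n kubernetes-dashboard get svc -o wide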
	
	
	==> kube-controller-manager [ae5869cf712bf7909c67aaf8a14f0be1a3ace2ea33f2b9abc08d3e78149e156f] <==
	I1101 09:30:56.398483       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 09:30:56.398698       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:30:56.401359       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:30:56.406103       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 09:30:56.406633       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 09:30:56.410254       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 09:30:56.413484       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 09:30:56.418815       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 09:30:56.418917       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 09:30:56.420315       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 09:30:56.428779       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:30:56.428873       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:30:56.428904       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:30:56.429013       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:30:56.430131       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 09:30:56.430204       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 09:30:56.430295       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:30:56.433937       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 09:30:56.446371       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 09:30:56.452530       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 09:30:56.452633       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 09:30:56.452725       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-357229"
	I1101 09:30:56.452780       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 09:30:56.462958       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 09:30:56.472071       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	
	
	==> kube-proxy [976f4eb8d6c55dd3e124804e23e125f941974fcaab7c4d5e5d0326c26e5c577e] <==
	I1101 09:30:55.157183       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:30:55.241791       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:30:55.344219       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:30:55.344250       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 09:30:55.344324       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:30:55.366278       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:30:55.366347       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:30:55.370267       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:30:55.370589       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:30:55.370612       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:30:55.372424       1 config.go:200] "Starting service config controller"
	I1101 09:30:55.372444       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:30:55.372462       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:30:55.372466       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:30:55.372477       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:30:55.372481       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:30:55.373350       1 config.go:309] "Starting node config controller"
	I1101 09:30:55.373368       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:30:55.373376       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:30:55.472761       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:30:55.472794       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:30:55.472824       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e3948966e2df765c7a12e39bf7465a601cc905044915c5c42848f542f11cee90] <==
	W1101 09:30:52.819036       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 09:30:52.819076       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 09:30:52.819093       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 09:30:52.819100       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 09:30:53.033775       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:30:53.035192       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:30:53.044271       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:30:53.044406       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:30:53.044871       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:30:53.044897       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1101 09:30:53.074238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1101 09:30:53.099675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:30:53.099760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:30:53.099823       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:30:53.100566       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:30:53.100654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:30:53.100727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:30:53.100788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:30:53.100848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:30:53.100900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:30:53.100943       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:30:53.100984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:30:53.101030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:30:53.101110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1101 09:30:54.647344       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
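The burst of "Failed to watch ... is forbidden" errors at 09:30:53 is the usual startup ordering issue: the scheduler starts its informers before the apiserver has finished serving RBAC, and the errors stop once things settle (the last line shows the client-ca informer synced at 09:30:54). If they persisted, the binding to inspect would be:

    kubectl --context no-preload-357229 get clusterrolebinding system:kube-scheduler -o wide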
	
	
	==> kubelet <==
	Nov 01 09:30:57 no-preload-357229 kubelet[770]: W1101 09:30:57.390421     770 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6863b4e551e28c0cdf28394d17eb7fbc923c22a70b5222d3093502d55d1412b9/crio-f204861758bef3a9d8cba2bac125be4bcb7f55c3b8ad18d2e1ceef8bcd09a80f WatchSource:0}: Error finding container f204861758bef3a9d8cba2bac125be4bcb7f55c3b8ad18d2e1ceef8bcd09a80f: Status 404 returned error can't find the container with id f204861758bef3a9d8cba2bac125be4bcb7f55c3b8ad18d2e1ceef8bcd09a80f
	Nov 01 09:31:03 no-preload-357229 kubelet[770]: I1101 09:31:03.079953     770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 09:31:05 no-preload-357229 kubelet[770]: I1101 09:31:05.997678     770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-r6mtl" podStartSLOduration=5.047874158 podStartE2EDuration="9.997660949s" podCreationTimestamp="2025-11-01 09:30:56 +0000 UTC" firstStartedPulling="2025-11-01 09:30:57.38042059 +0000 UTC m=+9.669402289" lastFinishedPulling="2025-11-01 09:31:02.330207365 +0000 UTC m=+14.619189080" observedRunningTime="2025-11-01 09:31:03.034445589 +0000 UTC m=+15.323427288" watchObservedRunningTime="2025-11-01 09:31:05.997660949 +0000 UTC m=+18.286642648"
	Nov 01 09:31:07 no-preload-357229 kubelet[770]: I1101 09:31:07.023473     770 scope.go:117] "RemoveContainer" containerID="702b00bea8f9d2b54c7d8bf363e115ef79eb32ef5258ab8d135784894800e292"
	Nov 01 09:31:08 no-preload-357229 kubelet[770]: I1101 09:31:08.029085     770 scope.go:117] "RemoveContainer" containerID="702b00bea8f9d2b54c7d8bf363e115ef79eb32ef5258ab8d135784894800e292"
	Nov 01 09:31:08 no-preload-357229 kubelet[770]: I1101 09:31:08.029409     770 scope.go:117] "RemoveContainer" containerID="45c20ce4c11a7093b10ce14c8d93af5267a7428425a4b9d0bb720c22be1c49f9"
	Nov 01 09:31:08 no-preload-357229 kubelet[770]: E1101 09:31:08.029550     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6h7gx_kubernetes-dashboard(f795b6cf-e512-4595-b92d-209e691a437d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6h7gx" podUID="f795b6cf-e512-4595-b92d-209e691a437d"
	Nov 01 09:31:09 no-preload-357229 kubelet[770]: I1101 09:31:09.034927     770 scope.go:117] "RemoveContainer" containerID="45c20ce4c11a7093b10ce14c8d93af5267a7428425a4b9d0bb720c22be1c49f9"
	Nov 01 09:31:09 no-preload-357229 kubelet[770]: E1101 09:31:09.035572     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6h7gx_kubernetes-dashboard(f795b6cf-e512-4595-b92d-209e691a437d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6h7gx" podUID="f795b6cf-e512-4595-b92d-209e691a437d"
	Nov 01 09:31:18 no-preload-357229 kubelet[770]: I1101 09:31:18.004125     770 scope.go:117] "RemoveContainer" containerID="45c20ce4c11a7093b10ce14c8d93af5267a7428425a4b9d0bb720c22be1c49f9"
	Nov 01 09:31:19 no-preload-357229 kubelet[770]: I1101 09:31:19.069212     770 scope.go:117] "RemoveContainer" containerID="45c20ce4c11a7093b10ce14c8d93af5267a7428425a4b9d0bb720c22be1c49f9"
	Nov 01 09:31:19 no-preload-357229 kubelet[770]: I1101 09:31:19.070149     770 scope.go:117] "RemoveContainer" containerID="2efc3d3e3080ca6226b6aaf5cb2de97ef1419913a2e93f2376cf37ac473a2b5b"
	Nov 01 09:31:19 no-preload-357229 kubelet[770]: E1101 09:31:19.070305     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6h7gx_kubernetes-dashboard(f795b6cf-e512-4595-b92d-209e691a437d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6h7gx" podUID="f795b6cf-e512-4595-b92d-209e691a437d"
	Nov 01 09:31:26 no-preload-357229 kubelet[770]: I1101 09:31:26.096456     770 scope.go:117] "RemoveContainer" containerID="59ba904954f2515bbb33a39a736732a91955b5d518b16f073abeccaa6d6aa926"
	Nov 01 09:31:28 no-preload-357229 kubelet[770]: I1101 09:31:28.004150     770 scope.go:117] "RemoveContainer" containerID="2efc3d3e3080ca6226b6aaf5cb2de97ef1419913a2e93f2376cf37ac473a2b5b"
	Nov 01 09:31:28 no-preload-357229 kubelet[770]: E1101 09:31:28.005002     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6h7gx_kubernetes-dashboard(f795b6cf-e512-4595-b92d-209e691a437d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6h7gx" podUID="f795b6cf-e512-4595-b92d-209e691a437d"
	Nov 01 09:31:42 no-preload-357229 kubelet[770]: I1101 09:31:42.919236     770 scope.go:117] "RemoveContainer" containerID="2efc3d3e3080ca6226b6aaf5cb2de97ef1419913a2e93f2376cf37ac473a2b5b"
	Nov 01 09:31:43 no-preload-357229 kubelet[770]: I1101 09:31:43.141552     770 scope.go:117] "RemoveContainer" containerID="2efc3d3e3080ca6226b6aaf5cb2de97ef1419913a2e93f2376cf37ac473a2b5b"
	Nov 01 09:31:43 no-preload-357229 kubelet[770]: I1101 09:31:43.141738     770 scope.go:117] "RemoveContainer" containerID="7831ce5b5d1470589fdbbeb9fdb8a7764d12c7cb6003155e5fb0e944dc762564"
	Nov 01 09:31:43 no-preload-357229 kubelet[770]: E1101 09:31:43.141917     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6h7gx_kubernetes-dashboard(f795b6cf-e512-4595-b92d-209e691a437d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6h7gx" podUID="f795b6cf-e512-4595-b92d-209e691a437d"
	Nov 01 09:31:48 no-preload-357229 kubelet[770]: I1101 09:31:48.004415     770 scope.go:117] "RemoveContainer" containerID="7831ce5b5d1470589fdbbeb9fdb8a7764d12c7cb6003155e5fb0e944dc762564"
	Nov 01 09:31:48 no-preload-357229 kubelet[770]: E1101 09:31:48.005304     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6h7gx_kubernetes-dashboard(f795b6cf-e512-4595-b92d-209e691a437d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6h7gx" podUID="f795b6cf-e512-4595-b92d-209e691a437d"
	Nov 01 09:31:48 no-preload-357229 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 09:31:48 no-preload-357229 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 09:31:48 no-preload-357229 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
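The kubelet entries show the standard CrashLoopBackOff escalation for the scraper pod (back-off 10s, then 20s, then 40s) before kubelet itself is stopped by systemd at 09:31:48. The crash reason is in the previous container's output:

    kubectl --context no-preload-357229 -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-6h7gx --previous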
	
	
	==> kubernetes-dashboard [c30acba8410231f1908e91c30349b403dc6788563c3ff2988167f6eb869003eb] <==
	2025/11/01 09:31:02 Using namespace: kubernetes-dashboard
	2025/11/01 09:31:02 Using in-cluster config to connect to apiserver
	2025/11/01 09:31:02 Using secret token for csrf signing
	2025/11/01 09:31:02 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 09:31:02 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 09:31:02 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 09:31:02 Generating JWE encryption key
	2025/11/01 09:31:02 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 09:31:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 09:31:03 Initializing JWE encryption key from synchronized object
	2025/11/01 09:31:03 Creating in-cluster Sidecar client
	2025/11/01 09:31:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:31:03 Serving insecurely on HTTP port: 9090
	2025/11/01 09:31:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:31:02 Starting overwatch
	
	
	==> storage-provisioner [59ba904954f2515bbb33a39a736732a91955b5d518b16f073abeccaa6d6aa926] <==
	I1101 09:30:55.121985       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 09:31:25.126618       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [9de81cbcd3c30c458328c720dd937efe1790c7a91522e7bad9fcd94e49c9d97d] <==
	I1101 09:31:26.146244       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 09:31:26.158380       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 09:31:26.159352       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 09:31:26.162032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:29.618977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:33.878650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:37.476783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:40.529987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:43.552596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:43.561353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:31:43.561596       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 09:31:43.561826       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-357229_9cdb6427-5313-4fc9-b065-24ca94c49cdd!
	I1101 09:31:43.562517       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d74d4812-9df3-4278-ac4b-f8343a00c004", APIVersion:"v1", ResourceVersion:"693", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-357229_9cdb6427-5313-4fc9-b065-24ca94c49cdd became leader
	W1101 09:31:43.567663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:43.572903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:31:43.662266       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-357229_9cdb6427-5313-4fc9-b065-24ca94c49cdd!
	W1101 09:31:45.575411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:45.579782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:47.586103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:47.592272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:49.595334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:49.599689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:51.603173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:51.608569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
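The kubelet and storage-provisioner excerpts above show two routine patterns rather than the failure itself: kubelet backing off the crash-looping dashboard-metrics-scraper (the back-off doubling from 20s to 40s), and the restarted storage-provisioner re-acquiring the kube-system/k8s.io-minikube-hostpath leader lock before starting its controller. As a rough illustration of the latter, here is a minimal client-go leader-election sketch; the lock name and namespace come from the log, while everything else (a LeaseLock instead of the older Endpoints-based lock the deprecation warnings point at, the timings, the callbacks) is assumed for the example and is not the provisioner's actual code.

	// Minimal leader-election sketch, assuming in-cluster credentials.
	// Lock name/namespace mirror the log; the rest is illustrative.
	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // the provisioner runs in-cluster
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		id, _ := os.Hostname() // the log uses <node>_<uuid> as identity
		lock := &resourcelock.LeaseLock{
			LeaseMeta: metav1.ObjectMeta{
				Name:      "k8s.io-minikube-hostpath",
				Namespace: "kube-system",
			},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					// start the provisioner controller loop here
					log.Println("acquired lease, starting controller")
				},
				OnStoppedLeading: func() { log.Println("lost lease, exiting") },
			},
		})
	}

The repeated "v1 Endpoints is deprecated" warnings in the log are emitted because the provisioner still coordinates through an Endpoints object; a Lease-based lock as in the sketch would not trigger them.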
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-357229 -n no-preload-357229
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-357229 -n no-preload-357229: exit status 2 (383.048792ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
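The helper's `status --format={{.APIServer}}` call relies on the flag being a Go template evaluated against the status object, which is how it pulls a single component's state out of the command (the non-zero exit code encodes a non-Running component, which the helper explicitly treats as possibly fine). A small sketch of that style of template evaluation, using a stand-in struct rather than minikube's real status type:

	// Illustrative only: evaluating a "--format"-style Go template
	// against a status value. Status here is a stand-in struct.
	package main

	import (
		"os"
		"text/template"
	)

	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Paused"}
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err) // a field missing from the struct fails here, at execute time
		}
	}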
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-357229 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-357229
helpers_test.go:243: (dbg) docker inspect no-preload-357229:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6863b4e551e28c0cdf28394d17eb7fbc923c22a70b5222d3093502d55d1412b9",
	        "Created": "2025-11-01T09:29:04.610428393Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2503053,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:30:41.198686789Z",
	            "FinishedAt": "2025-11-01T09:30:40.395443423Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/6863b4e551e28c0cdf28394d17eb7fbc923c22a70b5222d3093502d55d1412b9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6863b4e551e28c0cdf28394d17eb7fbc923c22a70b5222d3093502d55d1412b9/hostname",
	        "HostsPath": "/var/lib/docker/containers/6863b4e551e28c0cdf28394d17eb7fbc923c22a70b5222d3093502d55d1412b9/hosts",
	        "LogPath": "/var/lib/docker/containers/6863b4e551e28c0cdf28394d17eb7fbc923c22a70b5222d3093502d55d1412b9/6863b4e551e28c0cdf28394d17eb7fbc923c22a70b5222d3093502d55d1412b9-json.log",
	        "Name": "/no-preload-357229",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-357229:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-357229",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6863b4e551e28c0cdf28394d17eb7fbc923c22a70b5222d3093502d55d1412b9",
	                "LowerDir": "/var/lib/docker/overlay2/5cc099e0674d206e58658b98596baea6d36e69290a8f09a34c31e1233de8e33b-init/diff:/var/lib/docker/overlay2/e248e2c4c8c52e2b41c7098e27a1e6d3433c7b0d01c47093073da500268c4b77/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5cc099e0674d206e58658b98596baea6d36e69290a8f09a34c31e1233de8e33b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5cc099e0674d206e58658b98596baea6d36e69290a8f09a34c31e1233de8e33b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5cc099e0674d206e58658b98596baea6d36e69290a8f09a34c31e1233de8e33b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-357229",
	                "Source": "/var/lib/docker/volumes/no-preload-357229/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-357229",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-357229",
	                "name.minikube.sigs.k8s.io": "no-preload-357229",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "94f2581520ff7f6866b905d40a18b223d3837b592ad77efc8099d4e0e784f349",
	            "SandboxKey": "/var/run/docker/netns/94f2581520ff",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36355"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36356"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36359"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36357"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36358"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-357229": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:80:a2:49:db:60",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9c399d9cfbf1bf49ecabecfc0553884dd8ceaaa3ff2f3c1310f3dc120db9b811",
	                    "EndpointID": "28bcb971d271569a4bed9762543a9fb909e46efa3c0dbeb71b8c2b2f297dfc2b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-357229",
	                        "6863b4e551e2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
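The inspect output above is where the post-mortem reads the container state and the published host ports (for example 8443/tcp for the apiserver mapped to 127.0.0.1:36358). A small sketch, assuming the same `docker inspect no-preload-357229` JSON is piped on stdin, that models only the fields shown above:

	// Decode the docker-inspect JSON array and print the state plus the
	// host endpoint for 8443/tcp. The struct covers only fields visible
	// in the output above.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os"
	)

	type inspect struct {
		Name  string
		State struct {
			Status  string
			Running bool
			Paused  bool
		}
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		var out []inspect // docker inspect prints a JSON array
		if err := json.NewDecoder(os.Stdin).Decode(&out); err != nil {
			log.Fatal(err)
		}
		for _, c := range out {
			fmt.Printf("%s: state=%s paused=%v", c.Name, c.State.Status, c.State.Paused)
			if eps := c.NetworkSettings.Ports["8443/tcp"]; len(eps) > 0 {
				fmt.Printf(" apiserver=%s:%s", eps[0].HostIp, eps[0].HostPort)
			}
			fmt.Println()
		}
	}

Usage would be along the lines of `docker inspect no-preload-357229 | go run inspect_ports.go` (the file name is arbitrary).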
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-357229 -n no-preload-357229
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-357229 -n no-preload-357229: exit status 2 (432.657804ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-357229 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-357229 logs -n 25: (1.491219838s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cert-options-578478 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-578478    │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ 01 Nov 25 09:26 UTC │
	│ delete  │ -p cert-options-578478                                                                                                                                                                                                                        │ cert-options-578478    │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ 01 Nov 25 09:26 UTC │
	│ start   │ -p old-k8s-version-068218 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-068218 │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ 01 Nov 25 09:27 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-068218 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-068218 │ jenkins │ v1.37.0 │ 01 Nov 25 09:27 UTC │                     │
	│ stop    │ -p old-k8s-version-068218 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-068218 │ jenkins │ v1.37.0 │ 01 Nov 25 09:27 UTC │ 01 Nov 25 09:27 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-068218 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-068218 │ jenkins │ v1.37.0 │ 01 Nov 25 09:27 UTC │ 01 Nov 25 09:27 UTC │
	│ start   │ -p old-k8s-version-068218 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-068218 │ jenkins │ v1.37.0 │ 01 Nov 25 09:27 UTC │ 01 Nov 25 09:28 UTC │
	│ image   │ old-k8s-version-068218 image list --format=json                                                                                                                                                                                               │ old-k8s-version-068218 │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │ 01 Nov 25 09:28 UTC │
	│ pause   │ -p old-k8s-version-068218 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-068218 │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │                     │
	│ delete  │ -p old-k8s-version-068218                                                                                                                                                                                                                     │ old-k8s-version-068218 │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ delete  │ -p old-k8s-version-068218                                                                                                                                                                                                                     │ old-k8s-version-068218 │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ start   │ -p no-preload-357229 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-357229      │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:30 UTC │
	│ start   │ -p cert-expiration-218273 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-218273 │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ delete  │ -p cert-expiration-218273                                                                                                                                                                                                                     │ cert-expiration-218273 │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ start   │ -p embed-certs-312549 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-312549     │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:31 UTC │
	│ addons  │ enable metrics-server -p no-preload-357229 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-357229      │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │                     │
	│ stop    │ -p no-preload-357229 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-357229      │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
	│ addons  │ enable dashboard -p no-preload-357229 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-357229      │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
	│ start   │ -p no-preload-357229 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-357229      │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:31 UTC │
	│ addons  │ enable metrics-server -p embed-certs-312549 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-312549     │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ stop    │ -p embed-certs-312549 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-312549     │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ image   │ no-preload-357229 image list --format=json                                                                                                                                                                                                    │ no-preload-357229      │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ addons  │ enable dashboard -p embed-certs-312549 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-312549     │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ start   │ -p embed-certs-312549 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-312549     │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ pause   │ -p no-preload-357229 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-357229      │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:31:48
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:31:48.071999 2506068 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:31:48.072212 2506068 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:48.072224 2506068 out.go:374] Setting ErrFile to fd 2...
	I1101 09:31:48.072229 2506068 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:48.072517 2506068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 09:31:48.072976 2506068 out.go:368] Setting JSON to false
	I1101 09:31:48.079543 2506068 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":65654,"bootTime":1761923854,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 09:31:48.079630 2506068 start.go:143] virtualization:  
	I1101 09:31:48.083749 2506068 out.go:179] * [embed-certs-312549] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 09:31:48.087806 2506068 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:31:48.087915 2506068 notify.go:221] Checking for updates...
	I1101 09:31:48.094132 2506068 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:31:48.097208 2506068 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:31:48.100167 2506068 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2314135/.minikube
	I1101 09:31:48.104005 2506068 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 09:31:48.106987 2506068 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:31:48.110464 2506068 config.go:182] Loaded profile config "embed-certs-312549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:48.111044 2506068 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:31:48.148993 2506068 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:31:48.149109 2506068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:31:48.258431 2506068 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 09:31:48.247996481 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:31:48.259303 2506068 docker.go:319] overlay module found
	I1101 09:31:48.264692 2506068 out.go:179] * Using the docker driver based on existing profile
	I1101 09:31:48.267544 2506068 start.go:309] selected driver: docker
	I1101 09:31:48.267557 2506068 start.go:930] validating driver "docker" against &{Name:embed-certs-312549 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-312549 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:31:48.267654 2506068 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:31:48.268438 2506068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:31:48.340362 2506068 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 09:31:48.331586885 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:31:48.340710 2506068 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:31:48.340742 2506068 cni.go:84] Creating CNI manager for ""
	I1101 09:31:48.340806 2506068 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:31:48.340846 2506068 start.go:353] cluster config:
	{Name:embed-certs-312549 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-312549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:31:48.345856 2506068 out.go:179] * Starting "embed-certs-312549" primary control-plane node in "embed-certs-312549" cluster
	I1101 09:31:48.348737 2506068 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:31:48.351628 2506068 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:31:48.354405 2506068 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:31:48.354459 2506068 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 09:31:48.354469 2506068 cache.go:59] Caching tarball of preloaded images
	I1101 09:31:48.354506 2506068 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:31:48.354549 2506068 preload.go:233] Found /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 09:31:48.354560 2506068 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:31:48.354682 2506068 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/embed-certs-312549/config.json ...
	I1101 09:31:48.377785 2506068 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:31:48.377817 2506068 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:31:48.377829 2506068 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:31:48.377860 2506068 start.go:360] acquireMachinesLock for embed-certs-312549: {Name:mkc891654a695438e19d0a82e76ef43fc02ba964 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:31:48.377911 2506068 start.go:364] duration metric: took 35.149µs to acquireMachinesLock for "embed-certs-312549"
	I1101 09:31:48.377930 2506068 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:31:48.377935 2506068 fix.go:54] fixHost starting: 
	I1101 09:31:48.378184 2506068 cli_runner.go:164] Run: docker container inspect embed-certs-312549 --format={{.State.Status}}
	I1101 09:31:48.395206 2506068 fix.go:112] recreateIfNeeded on embed-certs-312549: state=Stopped err=<nil>
	W1101 09:31:48.395239 2506068 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Nov 01 09:31:35 no-preload-357229 crio[652]: time="2025-11-01T09:31:35.363891997Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:31:35 no-preload-357229 crio[652]: time="2025-11-01T09:31:35.36985399Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:31:35 no-preload-357229 crio[652]: time="2025-11-01T09:31:35.370323761Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:31:35 no-preload-357229 crio[652]: time="2025-11-01T09:31:35.370362275Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:31:35 no-preload-357229 crio[652]: time="2025-11-01T09:31:35.381236Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:31:35 no-preload-357229 crio[652]: time="2025-11-01T09:31:35.381279962Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:31:35 no-preload-357229 crio[652]: time="2025-11-01T09:31:35.38129839Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:31:35 no-preload-357229 crio[652]: time="2025-11-01T09:31:35.393220464Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:31:35 no-preload-357229 crio[652]: time="2025-11-01T09:31:35.393373633Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:31:35 no-preload-357229 crio[652]: time="2025-11-01T09:31:35.393464076Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:31:35 no-preload-357229 crio[652]: time="2025-11-01T09:31:35.399522107Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:31:35 no-preload-357229 crio[652]: time="2025-11-01T09:31:35.399695074Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:31:42 no-preload-357229 crio[652]: time="2025-11-01T09:31:42.919840478Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5aa0a164-0595-4a22-9b71-6f164b8e1886 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:31:42 no-preload-357229 crio[652]: time="2025-11-01T09:31:42.920875371Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1b2f8935-05c6-4012-88b6-0409934d65d9 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:31:42 no-preload-357229 crio[652]: time="2025-11-01T09:31:42.921742169Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6h7gx/dashboard-metrics-scraper" id=1f74c048-b36f-42d5-bcd2-730b45e5fb56 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:31:42 no-preload-357229 crio[652]: time="2025-11-01T09:31:42.92184048Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:31:42 no-preload-357229 crio[652]: time="2025-11-01T09:31:42.929023904Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:31:42 no-preload-357229 crio[652]: time="2025-11-01T09:31:42.929691438Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:31:42 no-preload-357229 crio[652]: time="2025-11-01T09:31:42.955951682Z" level=info msg="Created container 7831ce5b5d1470589fdbbeb9fdb8a7764d12c7cb6003155e5fb0e944dc762564: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6h7gx/dashboard-metrics-scraper" id=1f74c048-b36f-42d5-bcd2-730b45e5fb56 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:31:42 no-preload-357229 crio[652]: time="2025-11-01T09:31:42.956920058Z" level=info msg="Starting container: 7831ce5b5d1470589fdbbeb9fdb8a7764d12c7cb6003155e5fb0e944dc762564" id=fe130744-6a63-4f7e-853c-05cc7c79e993 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:31:42 no-preload-357229 crio[652]: time="2025-11-01T09:31:42.958448951Z" level=info msg="Started container" PID=1733 containerID=7831ce5b5d1470589fdbbeb9fdb8a7764d12c7cb6003155e5fb0e944dc762564 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6h7gx/dashboard-metrics-scraper id=fe130744-6a63-4f7e-853c-05cc7c79e993 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f204861758bef3a9d8cba2bac125be4bcb7f55c3b8ad18d2e1ceef8bcd09a80f
	Nov 01 09:31:42 no-preload-357229 conmon[1731]: conmon 7831ce5b5d1470589fdb <ninfo>: container 1733 exited with status 1
	Nov 01 09:31:43 no-preload-357229 crio[652]: time="2025-11-01T09:31:43.14320455Z" level=info msg="Removing container: 2efc3d3e3080ca6226b6aaf5cb2de97ef1419913a2e93f2376cf37ac473a2b5b" id=171650c1-b544-4b82-b71b-f4ba04bdf50e name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:31:43 no-preload-357229 crio[652]: time="2025-11-01T09:31:43.15390251Z" level=info msg="Error loading conmon cgroup of container 2efc3d3e3080ca6226b6aaf5cb2de97ef1419913a2e93f2376cf37ac473a2b5b: cgroup deleted" id=171650c1-b544-4b82-b71b-f4ba04bdf50e name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:31:43 no-preload-357229 crio[652]: time="2025-11-01T09:31:43.157149812Z" level=info msg="Removed container 2efc3d3e3080ca6226b6aaf5cb2de97ef1419913a2e93f2376cf37ac473a2b5b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6h7gx/dashboard-metrics-scraper" id=171650c1-b544-4b82-b71b-f4ba04bdf50e name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	7831ce5b5d147       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           10 seconds ago       Exited              dashboard-metrics-scraper   3                   f204861758bef       dashboard-metrics-scraper-6ffb444bf9-6h7gx   kubernetes-dashboard
	9de81cbcd3c30       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           27 seconds ago       Running             storage-provisioner         2                   0bfb4bbc09a4e       storage-provisioner                          kube-system
	c30acba841023       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   51 seconds ago       Running             kubernetes-dashboard        0                   f995f1f4e16b7       kubernetes-dashboard-855c9754f9-r6mtl        kubernetes-dashboard
	1438c362f8e0c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   19081e3fce3e7       coredns-66bc5c9577-txw5s                     kube-system
	8bd8813a30ca1       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   7ad6e33aa48b3       busybox                                      default
	59ba904954f25       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           58 seconds ago       Exited              storage-provisioner         1                   0bfb4bbc09a4e       storage-provisioner                          kube-system
	771f2b304e60c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           58 seconds ago       Running             kindnet-cni                 1                   dcc8d5b7566a7       kindnet-lxlsh                                kube-system
	976f4eb8d6c55       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   fb2bf5742da24       kube-proxy-2mqtw                             kube-system
	66bc439562b33       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   97e5aebeadcd4       kube-apiserver-no-preload-357229             kube-system
	68f7fcc91b8be       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   07a3db5ad95af       etcd-no-preload-357229                       kube-system
	ae5869cf712bf       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   ed342b3f7651d       kube-controller-manager-no-preload-357229    kube-system
	e3948966e2df7       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   bcd0c46a12b45       kube-scheduler-no-preload-357229             kube-system
	
	
	==> coredns [1438c362f8e0c3aa8b9a453bd141bf368984df37861d76c34aa0000a95c7a3b1] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47095 - 1105 "HINFO IN 1393672097041498670.3145267040789559047. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.050925308s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-357229
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-357229
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=no-preload-357229
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_29_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:29:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-357229
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:31:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:31:24 +0000   Sat, 01 Nov 2025 09:29:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:31:24 +0000   Sat, 01 Nov 2025 09:29:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:31:24 +0000   Sat, 01 Nov 2025 09:29:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:31:24 +0000   Sat, 01 Nov 2025 09:30:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-357229
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                7d552ba1-c6de-4d90-ae3f-74806a4aebb4
	  Boot ID:                    eebecd53-57fd-46e5-aa39-103fca906436
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-txw5s                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     116s
	  kube-system                 etcd-no-preload-357229                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m2s
	  kube-system                 kindnet-lxlsh                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      116s
	  kube-system                 kube-apiserver-no-preload-357229              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-no-preload-357229     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-proxy-2mqtw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-scheduler-no-preload-357229              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-6h7gx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-r6mtl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 114s                   kube-proxy       
	  Normal   Starting                 58s                    kube-proxy       
	  Normal   Starting                 2m12s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m12s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m12s (x8 over 2m12s)  kubelet          Node no-preload-357229 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m12s (x8 over 2m12s)  kubelet          Node no-preload-357229 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m12s (x8 over 2m12s)  kubelet          Node no-preload-357229 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m1s                   kubelet          Node no-preload-357229 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m1s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m1s                   kubelet          Node no-preload-357229 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m1s                   kubelet          Node no-preload-357229 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m1s                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           117s                   node-controller  Node no-preload-357229 event: Registered Node no-preload-357229 in Controller
	  Normal   NodeReady                99s                    kubelet          Node no-preload-357229 status is now: NodeReady
	  Normal   Starting                 66s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 66s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  65s (x8 over 66s)      kubelet          Node no-preload-357229 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    65s (x8 over 66s)      kubelet          Node no-preload-357229 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     65s (x8 over 66s)      kubelet          Node no-preload-357229 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                    node-controller  Node no-preload-357229 event: Registered Node no-preload-357229 in Controller
	
	
	==> dmesg <==
	[Nov 1 09:10] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:11] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:12] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:13] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:14] overlayfs: idmapped layers are currently not supported
	[  +7.992192] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:15] overlayfs: idmapped layers are currently not supported
	[ +24.457663] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:16] overlayfs: idmapped layers are currently not supported
	[ +26.408819] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:18] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:22] overlayfs: idmapped layers are currently not supported
	[ +31.970573] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:24] overlayfs: idmapped layers are currently not supported
	[ +34.721891] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:25] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:26] overlayfs: idmapped layers are currently not supported
	[  +0.217637] overlayfs: idmapped layers are currently not supported
	[ +42.063471] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:29] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:30] overlayfs: idmapped layers are currently not supported
	[ +22.794250] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [68f7fcc91b8befc150b5fb790881da1ad70f3bfe9fa8eb19146693bc1a766b36] <==
	{"level":"warn","ts":"2025-11-01T09:30:51.567473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.585317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.617753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.636951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.653588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.675333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.694194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.708718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.724444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.760186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.769638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.801381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.820771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.827637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.850334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.869614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.885981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.908957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.919140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.930472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.953331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:51.984469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:52.000561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:52.014524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:52.123354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45410","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:31:53 up 18:14,  0 user,  load average: 2.21, 3.24, 2.93
	Linux no-preload-357229 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [771f2b304e60c1b1d5959ab96ba0831e10eea22c8ac40a0169fea3da6d8acaba] <==
	I1101 09:30:55.052600       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:30:55.058775       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 09:30:55.059025       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:30:55.059140       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:30:55.059188       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:30:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:30:55.354245       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:30:55.354310       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:30:55.354341       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:30:55.354754       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 09:31:25.355253       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 09:31:25.355382       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 09:31:25.355436       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 09:31:25.355454       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1101 09:31:26.854556       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:31:26.854593       1 metrics.go:72] Registering metrics
	I1101 09:31:26.854647       1 controller.go:711] "Syncing nftables rules"
	I1101 09:31:35.355746       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 09:31:35.355780       1 main.go:301] handling current node
	I1101 09:31:45.354930       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 09:31:45.354974       1 main.go:301] handling current node
	
	
	==> kube-apiserver [66bc439562b33aee6bf209a9e922684969e9f8205826ecb76d4a5f42eff5e976] <==
	I1101 09:30:53.127482       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 09:30:53.127498       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 09:30:53.133472       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:30:53.133530       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 09:30:53.133568       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:30:53.134092       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 09:30:53.158256       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 09:30:53.172011       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 09:30:53.172034       1 policy_source.go:240] refreshing policies
	I1101 09:30:53.195449       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 09:30:53.195809       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 09:30:53.201801       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 09:30:53.209831       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:30:53.222900       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:30:53.712459       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:30:53.782726       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:30:53.834508       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:30:53.873128       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:30:53.893417       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:30:53.910964       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:30:53.988342       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.34.75"}
	I1101 09:30:54.057787       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.222.228"}
	I1101 09:30:56.535211       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:30:56.834699       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:30:56.884406       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [ae5869cf712bf7909c67aaf8a14f0be1a3ace2ea33f2b9abc08d3e78149e156f] <==
	I1101 09:30:56.398483       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 09:30:56.398698       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:30:56.401359       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:30:56.406103       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 09:30:56.406633       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 09:30:56.410254       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 09:30:56.413484       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 09:30:56.418815       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 09:30:56.418917       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 09:30:56.420315       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 09:30:56.428779       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:30:56.428873       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:30:56.428904       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:30:56.429013       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:30:56.430131       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 09:30:56.430204       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 09:30:56.430295       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:30:56.433937       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 09:30:56.446371       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 09:30:56.452530       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 09:30:56.452633       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 09:30:56.452725       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-357229"
	I1101 09:30:56.452780       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 09:30:56.462958       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 09:30:56.472071       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	
	
	==> kube-proxy [976f4eb8d6c55dd3e124804e23e125f941974fcaab7c4d5e5d0326c26e5c577e] <==
	I1101 09:30:55.157183       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:30:55.241791       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:30:55.344219       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:30:55.344250       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 09:30:55.344324       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:30:55.366278       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:30:55.366347       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:30:55.370267       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:30:55.370589       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:30:55.370612       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:30:55.372424       1 config.go:200] "Starting service config controller"
	I1101 09:30:55.372444       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:30:55.372462       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:30:55.372466       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:30:55.372477       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:30:55.372481       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:30:55.373350       1 config.go:309] "Starting node config controller"
	I1101 09:30:55.373368       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:30:55.373376       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:30:55.472761       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:30:55.472794       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:30:55.472824       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e3948966e2df765c7a12e39bf7465a601cc905044915c5c42848f542f11cee90] <==
	W1101 09:30:52.819036       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 09:30:52.819076       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 09:30:52.819093       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 09:30:52.819100       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 09:30:53.033775       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:30:53.035192       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:30:53.044271       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:30:53.044406       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:30:53.044871       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:30:53.044897       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1101 09:30:53.074238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1101 09:30:53.099675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:30:53.099760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:30:53.099823       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:30:53.100566       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:30:53.100654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:30:53.100727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:30:53.100788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:30:53.100848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:30:53.100900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:30:53.100943       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:30:53.100984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:30:53.101030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:30:53.101110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1101 09:30:54.647344       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:30:57 no-preload-357229 kubelet[770]: W1101 09:30:57.390421     770 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6863b4e551e28c0cdf28394d17eb7fbc923c22a70b5222d3093502d55d1412b9/crio-f204861758bef3a9d8cba2bac125be4bcb7f55c3b8ad18d2e1ceef8bcd09a80f WatchSource:0}: Error finding container f204861758bef3a9d8cba2bac125be4bcb7f55c3b8ad18d2e1ceef8bcd09a80f: Status 404 returned error can't find the container with id f204861758bef3a9d8cba2bac125be4bcb7f55c3b8ad18d2e1ceef8bcd09a80f
	Nov 01 09:31:03 no-preload-357229 kubelet[770]: I1101 09:31:03.079953     770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 09:31:05 no-preload-357229 kubelet[770]: I1101 09:31:05.997678     770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-r6mtl" podStartSLOduration=5.047874158 podStartE2EDuration="9.997660949s" podCreationTimestamp="2025-11-01 09:30:56 +0000 UTC" firstStartedPulling="2025-11-01 09:30:57.38042059 +0000 UTC m=+9.669402289" lastFinishedPulling="2025-11-01 09:31:02.330207365 +0000 UTC m=+14.619189080" observedRunningTime="2025-11-01 09:31:03.034445589 +0000 UTC m=+15.323427288" watchObservedRunningTime="2025-11-01 09:31:05.997660949 +0000 UTC m=+18.286642648"
	Nov 01 09:31:07 no-preload-357229 kubelet[770]: I1101 09:31:07.023473     770 scope.go:117] "RemoveContainer" containerID="702b00bea8f9d2b54c7d8bf363e115ef79eb32ef5258ab8d135784894800e292"
	Nov 01 09:31:08 no-preload-357229 kubelet[770]: I1101 09:31:08.029085     770 scope.go:117] "RemoveContainer" containerID="702b00bea8f9d2b54c7d8bf363e115ef79eb32ef5258ab8d135784894800e292"
	Nov 01 09:31:08 no-preload-357229 kubelet[770]: I1101 09:31:08.029409     770 scope.go:117] "RemoveContainer" containerID="45c20ce4c11a7093b10ce14c8d93af5267a7428425a4b9d0bb720c22be1c49f9"
	Nov 01 09:31:08 no-preload-357229 kubelet[770]: E1101 09:31:08.029550     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6h7gx_kubernetes-dashboard(f795b6cf-e512-4595-b92d-209e691a437d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6h7gx" podUID="f795b6cf-e512-4595-b92d-209e691a437d"
	Nov 01 09:31:09 no-preload-357229 kubelet[770]: I1101 09:31:09.034927     770 scope.go:117] "RemoveContainer" containerID="45c20ce4c11a7093b10ce14c8d93af5267a7428425a4b9d0bb720c22be1c49f9"
	Nov 01 09:31:09 no-preload-357229 kubelet[770]: E1101 09:31:09.035572     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6h7gx_kubernetes-dashboard(f795b6cf-e512-4595-b92d-209e691a437d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6h7gx" podUID="f795b6cf-e512-4595-b92d-209e691a437d"
	Nov 01 09:31:18 no-preload-357229 kubelet[770]: I1101 09:31:18.004125     770 scope.go:117] "RemoveContainer" containerID="45c20ce4c11a7093b10ce14c8d93af5267a7428425a4b9d0bb720c22be1c49f9"
	Nov 01 09:31:19 no-preload-357229 kubelet[770]: I1101 09:31:19.069212     770 scope.go:117] "RemoveContainer" containerID="45c20ce4c11a7093b10ce14c8d93af5267a7428425a4b9d0bb720c22be1c49f9"
	Nov 01 09:31:19 no-preload-357229 kubelet[770]: I1101 09:31:19.070149     770 scope.go:117] "RemoveContainer" containerID="2efc3d3e3080ca6226b6aaf5cb2de97ef1419913a2e93f2376cf37ac473a2b5b"
	Nov 01 09:31:19 no-preload-357229 kubelet[770]: E1101 09:31:19.070305     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6h7gx_kubernetes-dashboard(f795b6cf-e512-4595-b92d-209e691a437d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6h7gx" podUID="f795b6cf-e512-4595-b92d-209e691a437d"
	Nov 01 09:31:26 no-preload-357229 kubelet[770]: I1101 09:31:26.096456     770 scope.go:117] "RemoveContainer" containerID="59ba904954f2515bbb33a39a736732a91955b5d518b16f073abeccaa6d6aa926"
	Nov 01 09:31:28 no-preload-357229 kubelet[770]: I1101 09:31:28.004150     770 scope.go:117] "RemoveContainer" containerID="2efc3d3e3080ca6226b6aaf5cb2de97ef1419913a2e93f2376cf37ac473a2b5b"
	Nov 01 09:31:28 no-preload-357229 kubelet[770]: E1101 09:31:28.005002     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6h7gx_kubernetes-dashboard(f795b6cf-e512-4595-b92d-209e691a437d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6h7gx" podUID="f795b6cf-e512-4595-b92d-209e691a437d"
	Nov 01 09:31:42 no-preload-357229 kubelet[770]: I1101 09:31:42.919236     770 scope.go:117] "RemoveContainer" containerID="2efc3d3e3080ca6226b6aaf5cb2de97ef1419913a2e93f2376cf37ac473a2b5b"
	Nov 01 09:31:43 no-preload-357229 kubelet[770]: I1101 09:31:43.141552     770 scope.go:117] "RemoveContainer" containerID="2efc3d3e3080ca6226b6aaf5cb2de97ef1419913a2e93f2376cf37ac473a2b5b"
	Nov 01 09:31:43 no-preload-357229 kubelet[770]: I1101 09:31:43.141738     770 scope.go:117] "RemoveContainer" containerID="7831ce5b5d1470589fdbbeb9fdb8a7764d12c7cb6003155e5fb0e944dc762564"
	Nov 01 09:31:43 no-preload-357229 kubelet[770]: E1101 09:31:43.141917     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6h7gx_kubernetes-dashboard(f795b6cf-e512-4595-b92d-209e691a437d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6h7gx" podUID="f795b6cf-e512-4595-b92d-209e691a437d"
	Nov 01 09:31:48 no-preload-357229 kubelet[770]: I1101 09:31:48.004415     770 scope.go:117] "RemoveContainer" containerID="7831ce5b5d1470589fdbbeb9fdb8a7764d12c7cb6003155e5fb0e944dc762564"
	Nov 01 09:31:48 no-preload-357229 kubelet[770]: E1101 09:31:48.005304     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6h7gx_kubernetes-dashboard(f795b6cf-e512-4595-b92d-209e691a437d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6h7gx" podUID="f795b6cf-e512-4595-b92d-209e691a437d"
	Nov 01 09:31:48 no-preload-357229 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 09:31:48 no-preload-357229 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 09:31:48 no-preload-357229 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [c30acba8410231f1908e91c30349b403dc6788563c3ff2988167f6eb869003eb] <==
	2025/11/01 09:31:02 Using namespace: kubernetes-dashboard
	2025/11/01 09:31:02 Using in-cluster config to connect to apiserver
	2025/11/01 09:31:02 Using secret token for csrf signing
	2025/11/01 09:31:02 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 09:31:02 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 09:31:02 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 09:31:02 Generating JWE encryption key
	2025/11/01 09:31:02 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 09:31:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 09:31:03 Initializing JWE encryption key from synchronized object
	2025/11/01 09:31:03 Creating in-cluster Sidecar client
	2025/11/01 09:31:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:31:03 Serving insecurely on HTTP port: 9090
	2025/11/01 09:31:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:31:02 Starting overwatch
	
	
	==> storage-provisioner [59ba904954f2515bbb33a39a736732a91955b5d518b16f073abeccaa6d6aa926] <==
	I1101 09:30:55.121985       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 09:31:25.126618       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [9de81cbcd3c30c458328c720dd937efe1790c7a91522e7bad9fcd94e49c9d97d] <==
	I1101 09:31:26.158380       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 09:31:26.159352       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 09:31:26.162032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:29.618977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:33.878650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:37.476783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:40.529987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:43.552596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:43.561353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:31:43.561596       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 09:31:43.561826       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-357229_9cdb6427-5313-4fc9-b065-24ca94c49cdd!
	I1101 09:31:43.562517       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d74d4812-9df3-4278-ac4b-f8343a00c004", APIVersion:"v1", ResourceVersion:"693", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-357229_9cdb6427-5313-4fc9-b065-24ca94c49cdd became leader
	W1101 09:31:43.567663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:43.572903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:31:43.662266       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-357229_9cdb6427-5313-4fc9-b065-24ca94c49cdd!
	W1101 09:31:45.575411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:45.579782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:47.586103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:47.592272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:49.595334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:49.599689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:51.603173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:51.608569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:53.611737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:53.620951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-357229 -n no-preload-357229
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-357229 -n no-preload-357229: exit status 2 (396.116139ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-357229 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.64s)

x
+
TestStartStop/group/embed-certs/serial/Pause (7s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-312549 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-312549 --alsologtostderr -v=1: exit status 80 (2.55692674s)

-- stdout --
	* Pausing node embed-certs-312549 ... 
	
	

-- /stdout --
** stderr ** 
	I1101 09:33:02.300431 2512057 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:33:02.300630 2512057 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:33:02.300669 2512057 out.go:374] Setting ErrFile to fd 2...
	I1101 09:33:02.300691 2512057 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:33:02.301079 2512057 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 09:33:02.301450 2512057 out.go:368] Setting JSON to false
	I1101 09:33:02.301508 2512057 mustload.go:66] Loading cluster: embed-certs-312549
	I1101 09:33:02.302205 2512057 config.go:182] Loaded profile config "embed-certs-312549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:33:02.303181 2512057 cli_runner.go:164] Run: docker container inspect embed-certs-312549 --format={{.State.Status}}
	I1101 09:33:02.324468 2512057 host.go:66] Checking if "embed-certs-312549" exists ...
	I1101 09:33:02.324922 2512057 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:33:02.389324 2512057 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-01 09:33:02.379983078 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:33:02.390085 2512057 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-312549 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 09:33:02.393566 2512057 out.go:179] * Pausing node embed-certs-312549 ... 
	I1101 09:33:02.397438 2512057 host.go:66] Checking if "embed-certs-312549" exists ...
	I1101 09:33:02.397841 2512057 ssh_runner.go:195] Run: systemctl --version
	I1101 09:33:02.397913 2512057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-312549
	I1101 09:33:02.417410 2512057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36360 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/embed-certs-312549/id_rsa Username:docker}
	I1101 09:33:02.527743 2512057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:33:02.542186 2512057 pause.go:52] kubelet running: true
	I1101 09:33:02.542272 2512057 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:33:02.819996 2512057 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:33:02.820084 2512057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:33:02.896031 2512057 cri.go:89] found id: "88f21e91d38eea474220af6738f4c80b59005263d6a122d6f4ea2dbb094eb4e7"
	I1101 09:33:02.896056 2512057 cri.go:89] found id: "3bad1a5a2c56426afecd3053392722f520a806428d830ce21c18416e168ff456"
	I1101 09:33:02.896061 2512057 cri.go:89] found id: "b61df37594a558df51b12bb67c5ad1aee69b219068de28cbc8e135755adf63ad"
	I1101 09:33:02.896065 2512057 cri.go:89] found id: "94bc258df31ccba3243d19817bf0540c4bc6e3b16c101f6d659f8223b0db31ac"
	I1101 09:33:02.896068 2512057 cri.go:89] found id: "2e0be9c9bcec658ba4517bfd0df151ba737b582e932f77ed6f859646902bd9d4"
	I1101 09:33:02.896071 2512057 cri.go:89] found id: "416e95ed80a8e34d4666b94df66f5dd74615f185d64387cdea0577b26bbc3aed"
	I1101 09:33:02.896074 2512057 cri.go:89] found id: "ccdcc22e1e2147d9e6c4608d49f176a9919f42a514223d1fda1375c8f0c44107"
	I1101 09:33:02.896077 2512057 cri.go:89] found id: "830d779c1441c7d2da6563df9cd6c13b42ae8a0d7fba581750fdabee9972e73d"
	I1101 09:33:02.896080 2512057 cri.go:89] found id: "680ffbebf225019dcc88b59f2110c463dad6be34ca153a1fc7b184d965991faa"
	I1101 09:33:02.896087 2512057 cri.go:89] found id: "4247865bc1f3f240b55592eaebb9c6bd7f2d474b8128881a488f19dcbf493252"
	I1101 09:33:02.896097 2512057 cri.go:89] found id: "72995eb1c1da3b7de9fbddf97b960ce6553fff7c8c569ec7720907d1b0ce191a"
	I1101 09:33:02.896100 2512057 cri.go:89] found id: ""
	I1101 09:33:02.896151 2512057 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:33:02.907012 2512057 retry.go:31] will retry after 211.367501ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:33:02Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:33:03.119480 2512057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:33:03.133804 2512057 pause.go:52] kubelet running: false
	I1101 09:33:03.133865 2512057 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:33:03.322286 2512057 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:33:03.322366 2512057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:33:03.398408 2512057 cri.go:89] found id: "88f21e91d38eea474220af6738f4c80b59005263d6a122d6f4ea2dbb094eb4e7"
	I1101 09:33:03.398429 2512057 cri.go:89] found id: "3bad1a5a2c56426afecd3053392722f520a806428d830ce21c18416e168ff456"
	I1101 09:33:03.398434 2512057 cri.go:89] found id: "b61df37594a558df51b12bb67c5ad1aee69b219068de28cbc8e135755adf63ad"
	I1101 09:33:03.398438 2512057 cri.go:89] found id: "94bc258df31ccba3243d19817bf0540c4bc6e3b16c101f6d659f8223b0db31ac"
	I1101 09:33:03.398442 2512057 cri.go:89] found id: "2e0be9c9bcec658ba4517bfd0df151ba737b582e932f77ed6f859646902bd9d4"
	I1101 09:33:03.398446 2512057 cri.go:89] found id: "416e95ed80a8e34d4666b94df66f5dd74615f185d64387cdea0577b26bbc3aed"
	I1101 09:33:03.398449 2512057 cri.go:89] found id: "ccdcc22e1e2147d9e6c4608d49f176a9919f42a514223d1fda1375c8f0c44107"
	I1101 09:33:03.398452 2512057 cri.go:89] found id: "830d779c1441c7d2da6563df9cd6c13b42ae8a0d7fba581750fdabee9972e73d"
	I1101 09:33:03.398455 2512057 cri.go:89] found id: "680ffbebf225019dcc88b59f2110c463dad6be34ca153a1fc7b184d965991faa"
	I1101 09:33:03.398471 2512057 cri.go:89] found id: "4247865bc1f3f240b55592eaebb9c6bd7f2d474b8128881a488f19dcbf493252"
	I1101 09:33:03.398480 2512057 cri.go:89] found id: "72995eb1c1da3b7de9fbddf97b960ce6553fff7c8c569ec7720907d1b0ce191a"
	I1101 09:33:03.398487 2512057 cri.go:89] found id: ""
	I1101 09:33:03.398537 2512057 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:33:03.410082 2512057 retry.go:31] will retry after 403.165574ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:33:03Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:33:03.813674 2512057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:33:03.827362 2512057 pause.go:52] kubelet running: false
	I1101 09:33:03.827424 2512057 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:33:03.995965 2512057 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:33:03.996084 2512057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:33:04.068727 2512057 cri.go:89] found id: "88f21e91d38eea474220af6738f4c80b59005263d6a122d6f4ea2dbb094eb4e7"
	I1101 09:33:04.068751 2512057 cri.go:89] found id: "3bad1a5a2c56426afecd3053392722f520a806428d830ce21c18416e168ff456"
	I1101 09:33:04.068756 2512057 cri.go:89] found id: "b61df37594a558df51b12bb67c5ad1aee69b219068de28cbc8e135755adf63ad"
	I1101 09:33:04.068760 2512057 cri.go:89] found id: "94bc258df31ccba3243d19817bf0540c4bc6e3b16c101f6d659f8223b0db31ac"
	I1101 09:33:04.068764 2512057 cri.go:89] found id: "2e0be9c9bcec658ba4517bfd0df151ba737b582e932f77ed6f859646902bd9d4"
	I1101 09:33:04.068768 2512057 cri.go:89] found id: "416e95ed80a8e34d4666b94df66f5dd74615f185d64387cdea0577b26bbc3aed"
	I1101 09:33:04.068772 2512057 cri.go:89] found id: "ccdcc22e1e2147d9e6c4608d49f176a9919f42a514223d1fda1375c8f0c44107"
	I1101 09:33:04.068775 2512057 cri.go:89] found id: "830d779c1441c7d2da6563df9cd6c13b42ae8a0d7fba581750fdabee9972e73d"
	I1101 09:33:04.068810 2512057 cri.go:89] found id: "680ffbebf225019dcc88b59f2110c463dad6be34ca153a1fc7b184d965991faa"
	I1101 09:33:04.068824 2512057 cri.go:89] found id: "4247865bc1f3f240b55592eaebb9c6bd7f2d474b8128881a488f19dcbf493252"
	I1101 09:33:04.068828 2512057 cri.go:89] found id: "72995eb1c1da3b7de9fbddf97b960ce6553fff7c8c569ec7720907d1b0ce191a"
	I1101 09:33:04.068831 2512057 cri.go:89] found id: ""
	I1101 09:33:04.068900 2512057 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:33:04.080297 2512057 retry.go:31] will retry after 419.388378ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:33:04Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:33:04.499923 2512057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:33:04.512858 2512057 pause.go:52] kubelet running: false
	I1101 09:33:04.512932 2512057 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:33:04.684372 2512057 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:33:04.684487 2512057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:33:04.758524 2512057 cri.go:89] found id: "88f21e91d38eea474220af6738f4c80b59005263d6a122d6f4ea2dbb094eb4e7"
	I1101 09:33:04.758545 2512057 cri.go:89] found id: "3bad1a5a2c56426afecd3053392722f520a806428d830ce21c18416e168ff456"
	I1101 09:33:04.758550 2512057 cri.go:89] found id: "b61df37594a558df51b12bb67c5ad1aee69b219068de28cbc8e135755adf63ad"
	I1101 09:33:04.758554 2512057 cri.go:89] found id: "94bc258df31ccba3243d19817bf0540c4bc6e3b16c101f6d659f8223b0db31ac"
	I1101 09:33:04.758557 2512057 cri.go:89] found id: "2e0be9c9bcec658ba4517bfd0df151ba737b582e932f77ed6f859646902bd9d4"
	I1101 09:33:04.758561 2512057 cri.go:89] found id: "416e95ed80a8e34d4666b94df66f5dd74615f185d64387cdea0577b26bbc3aed"
	I1101 09:33:04.758564 2512057 cri.go:89] found id: "ccdcc22e1e2147d9e6c4608d49f176a9919f42a514223d1fda1375c8f0c44107"
	I1101 09:33:04.758567 2512057 cri.go:89] found id: "830d779c1441c7d2da6563df9cd6c13b42ae8a0d7fba581750fdabee9972e73d"
	I1101 09:33:04.758570 2512057 cri.go:89] found id: "680ffbebf225019dcc88b59f2110c463dad6be34ca153a1fc7b184d965991faa"
	I1101 09:33:04.758580 2512057 cri.go:89] found id: "4247865bc1f3f240b55592eaebb9c6bd7f2d474b8128881a488f19dcbf493252"
	I1101 09:33:04.758584 2512057 cri.go:89] found id: "72995eb1c1da3b7de9fbddf97b960ce6553fff7c8c569ec7720907d1b0ce191a"
	I1101 09:33:04.758587 2512057 cri.go:89] found id: ""
	I1101 09:33:04.758641 2512057 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:33:04.773114 2512057 out.go:203] 
	W1101 09:33:04.775979 2512057 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:33:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:33:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:33:04.776006 2512057 out.go:285] * 
	* 
	W1101 09:33:04.788379 2512057 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:33:04.791204 2512057 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-312549 --alsologtostderr -v=1 failed: exit status 80
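The pause failure above follows a fixed loop: minikube disables the kubelet, lists CRI-O containers with crictl, then runs `sudo runc list -f json`, which exits 1 with "open /run/runc: no such file or directory"; after three retry.go backoffs (~211ms, ~403ms, ~419ms) it gives up with GUEST_PAUSE. The sketch below is not minikube's code. It only illustrates, under the assumption that a missing runc state directory simply means no runc-managed containers on the node, how that one error could be mapped to an empty list instead of a fatal exit. The helper name listRuncContainers, the hard-coded /run/runc root, and the added --root flag are all illustrative.

// Hypothetical sketch (not minikube's actual pause code): tolerate a missing
// runc state directory when listing containers on a CRI-O node, rather than
// failing the whole pause operation as seen in the log above.
package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

// listRuncContainers is a made-up helper. It shells out the same way the log
// shows ("sudo runc list -f json"), but if the state root does not exist it
// returns an empty JSON array instead of an error.
func listRuncContainers(root string) ([]byte, error) {
	if _, err := os.Stat(root); errors.Is(err, os.ErrNotExist) {
		// No state dir means runc has never created a container here.
		return []byte("[]"), nil
	}
	// --root is passed explicitly in this sketch; the log invocation relies
	// on runc's default root.
	out, err := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").CombinedOutput()
	if err != nil {
		return nil, fmt.Errorf("runc list: %w: %s", err, out)
	}
	return out, nil
}

func main() {
	out, err := listRuncContainers("/run/runc")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(string(out))
}

With the directory absent, as on this node, the sketch prints "[]" and exits 0 instead of surfacing the GUEST_PAUSE error.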
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-312549
helpers_test.go:243: (dbg) docker inspect embed-certs-312549:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "46c884efd26a2388e1c8d6b8b4b264552137880202618095e6b019b947feb1a6",
	        "Created": "2025-11-01T09:30:05.467452429Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2506271,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:31:48.438075887Z",
	            "FinishedAt": "2025-11-01T09:31:47.357326892Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/46c884efd26a2388e1c8d6b8b4b264552137880202618095e6b019b947feb1a6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/46c884efd26a2388e1c8d6b8b4b264552137880202618095e6b019b947feb1a6/hostname",
	        "HostsPath": "/var/lib/docker/containers/46c884efd26a2388e1c8d6b8b4b264552137880202618095e6b019b947feb1a6/hosts",
	        "LogPath": "/var/lib/docker/containers/46c884efd26a2388e1c8d6b8b4b264552137880202618095e6b019b947feb1a6/46c884efd26a2388e1c8d6b8b4b264552137880202618095e6b019b947feb1a6-json.log",
	        "Name": "/embed-certs-312549",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-312549:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-312549",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "46c884efd26a2388e1c8d6b8b4b264552137880202618095e6b019b947feb1a6",
	                "LowerDir": "/var/lib/docker/overlay2/e51930860c4af8d563e9604029040cab5d84be7600dfc7a374b99215830131ec-init/diff:/var/lib/docker/overlay2/e248e2c4c8c52e2b41c7098e27a1e6d3433c7b0d01c47093073da500268c4b77/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e51930860c4af8d563e9604029040cab5d84be7600dfc7a374b99215830131ec/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e51930860c4af8d563e9604029040cab5d84be7600dfc7a374b99215830131ec/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e51930860c4af8d563e9604029040cab5d84be7600dfc7a374b99215830131ec/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-312549",
	                "Source": "/var/lib/docker/volumes/embed-certs-312549/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-312549",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-312549",
	                "name.minikube.sigs.k8s.io": "embed-certs-312549",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "23ac91cad5b064fc80037bb63bdba2775d89777855afce7df142b857656efb35",
	            "SandboxKey": "/var/run/docker/netns/23ac91cad5b0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36360"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36361"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36364"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36362"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36363"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-312549": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:a0:cc:5d:83:e2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e3dabe0b25d9c671a5a74ecef725675d174c55efcf863b93a552f738453017d3",
	                    "EndpointID": "9a1255b68c09baf350a1997e1df2b3060ed0bcea51dc8355d0a3e6afca4a0ea9",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-312549",
	                        "46c884efd26a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
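The inspect output above confirms the mapping the test relied on: 22/tcp is published on 127.0.0.1:36360, which matches the SSH client line at the top of the failure log. A minimal, self-contained sketch (not minikube code) that recovers the same value by decoding `docker inspect` JSON, rather than the Go template shown in the cli_runner invocation; the container name is taken from the log:

// Sketch: read the published host port for 22/tcp from `docker inspect`,
// equivalent to the template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "embed-certs-312549").Output()
	if err != nil {
		log.Fatal(err)
	}
	var containers []container
	if err := json.Unmarshal(out, &containers); err != nil {
		log.Fatal(err)
	}
	if len(containers) == 0 || len(containers[0].NetworkSettings.Ports["22/tcp"]) == 0 {
		log.Fatal("no 22/tcp binding found")
	}
	// For the container inspected above this prints 36360.
	fmt.Println(containers[0].NetworkSettings.Ports["22/tcp"][0].HostPort)
}

Run while the container is up; it should print 36360 for this profile, the same port sshutil.go connected to.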
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-312549 -n embed-certs-312549
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-312549 -n embed-certs-312549: exit status 2 (356.632095ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-312549 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-312549 logs -n 25: (1.348477908s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-068218 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-068218       │ jenkins │ v1.37.0 │ 01 Nov 25 09:27 UTC │ 01 Nov 25 09:28 UTC │
	│ image   │ old-k8s-version-068218 image list --format=json                                                                                                                                                                                               │ old-k8s-version-068218       │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │ 01 Nov 25 09:28 UTC │
	│ pause   │ -p old-k8s-version-068218 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-068218       │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │                     │
	│ delete  │ -p old-k8s-version-068218                                                                                                                                                                                                                     │ old-k8s-version-068218       │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ delete  │ -p old-k8s-version-068218                                                                                                                                                                                                                     │ old-k8s-version-068218       │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ start   │ -p no-preload-357229 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:30 UTC │
	│ start   │ -p cert-expiration-218273 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-218273       │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ delete  │ -p cert-expiration-218273                                                                                                                                                                                                                     │ cert-expiration-218273       │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ start   │ -p embed-certs-312549 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:31 UTC │
	│ addons  │ enable metrics-server -p no-preload-357229 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │                     │
	│ stop    │ -p no-preload-357229 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
	│ addons  │ enable dashboard -p no-preload-357229 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
	│ start   │ -p no-preload-357229 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:31 UTC │
	│ addons  │ enable metrics-server -p embed-certs-312549 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ stop    │ -p embed-certs-312549 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ image   │ no-preload-357229 image list --format=json                                                                                                                                                                                                    │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ addons  │ enable dashboard -p embed-certs-312549 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ start   │ -p embed-certs-312549 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:32 UTC │
	│ pause   │ -p no-preload-357229 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ delete  │ -p no-preload-357229                                                                                                                                                                                                                          │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ delete  │ -p no-preload-357229                                                                                                                                                                                                                          │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ delete  │ -p disable-driver-mounts-054033                                                                                                                                                                                                               │ disable-driver-mounts-054033 │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ start   │ -p default-k8s-diff-port-703627 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-703627 │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ image   │ embed-certs-312549 image list --format=json                                                                                                                                                                                                   │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ pause   │ -p embed-certs-312549 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:31:58
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:31:58.493383 2508765 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:31:58.493489 2508765 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:58.493495 2508765 out.go:374] Setting ErrFile to fd 2...
	I1101 09:31:58.493499 2508765 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:58.493868 2508765 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 09:31:58.494334 2508765 out.go:368] Setting JSON to false
	I1101 09:31:58.495315 2508765 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":65664,"bootTime":1761923854,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 09:31:58.495397 2508765 start.go:143] virtualization:  
	I1101 09:31:58.515985 2508765 out.go:179] * [default-k8s-diff-port-703627] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 09:31:58.519129 2508765 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:31:58.520265 2508765 notify.go:221] Checking for updates...
	I1101 09:31:58.528868 2508765 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:31:58.531733 2508765 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:31:58.534563 2508765 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2314135/.minikube
	I1101 09:31:58.537463 2508765 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 09:31:58.545161 2508765 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:31:58.552571 2508765 config.go:182] Loaded profile config "embed-certs-312549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:58.552698 2508765 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:31:58.642398 2508765 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:31:58.642524 2508765 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:31:58.746040 2508765 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 09:31:58.736308945 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:31:58.746146 2508765 docker.go:319] overlay module found
	I1101 09:31:58.749242 2508765 out.go:179] * Using the docker driver based on user configuration
	I1101 09:31:58.752151 2508765 start.go:309] selected driver: docker
	I1101 09:31:58.752172 2508765 start.go:930] validating driver "docker" against <nil>
	I1101 09:31:58.752185 2508765 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:31:58.752862 2508765 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:31:58.874709 2508765 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 09:31:58.865210643 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:31:58.874862 2508765 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:31:58.875080 2508765 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:31:58.877976 2508765 out.go:179] * Using Docker driver with root privileges
	I1101 09:31:58.880858 2508765 cni.go:84] Creating CNI manager for ""
	I1101 09:31:58.880923 2508765 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:31:58.880935 2508765 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:31:58.881032 2508765 start.go:353] cluster config:
	{Name:default-k8s-diff-port-703627 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-703627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:31:58.884280 2508765 out.go:179] * Starting "default-k8s-diff-port-703627" primary control-plane node in "default-k8s-diff-port-703627" cluster
	I1101 09:31:58.887118 2508765 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:31:58.890097 2508765 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:31:58.893199 2508765 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:31:58.893270 2508765 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 09:31:58.893280 2508765 cache.go:59] Caching tarball of preloaded images
	I1101 09:31:58.893373 2508765 preload.go:233] Found /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 09:31:58.893382 2508765 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:31:58.893490 2508765 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/config.json ...
	I1101 09:31:58.893510 2508765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/config.json: {Name:mk1d062a219f17dfe2538736f6c17f88855efbaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:31:58.893664 2508765 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:31:58.917021 2508765 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:31:58.917042 2508765 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:31:58.917055 2508765 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:31:58.917091 2508765 start.go:360] acquireMachinesLock for default-k8s-diff-port-703627: {Name:mk723fbf5d77afd626dac1d43272d3636891d6fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:31:58.917191 2508765 start.go:364] duration metric: took 85.692µs to acquireMachinesLock for "default-k8s-diff-port-703627"
	I1101 09:31:58.917217 2508765 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-703627 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-703627 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:31:58.917291 2508765 start.go:125] createHost starting for "" (driver="docker")
	I1101 09:31:58.490975 2506068 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 09:31:58.491006 2506068 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 09:31:58.491076 2506068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-312549
	I1101 09:31:58.533492 2506068 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:31:58.533519 2506068 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:31:58.533593 2506068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-312549
	I1101 09:31:58.589127 2506068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36360 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/embed-certs-312549/id_rsa Username:docker}
	I1101 09:31:58.600142 2506068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36360 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/embed-certs-312549/id_rsa Username:docker}
	I1101 09:31:58.611563 2506068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36360 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/embed-certs-312549/id_rsa Username:docker}
	I1101 09:31:58.879240 2506068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:31:58.926013 2506068 node_ready.go:35] waiting up to 6m0s for node "embed-certs-312549" to be "Ready" ...
	I1101 09:31:58.961052 2506068 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 09:31:58.961073 2506068 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 09:31:58.971307 2506068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:31:59.018780 2506068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:31:59.028119 2506068 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 09:31:59.028141 2506068 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 09:31:59.126013 2506068 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 09:31:59.126035 2506068 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 09:31:59.237818 2506068 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 09:31:59.237838 2506068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 09:31:59.317549 2506068 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 09:31:59.317570 2506068 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 09:31:59.356208 2506068 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 09:31:59.356236 2506068 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 09:31:59.384031 2506068 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 09:31:59.384052 2506068 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 09:31:59.417734 2506068 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 09:31:59.417761 2506068 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 09:31:59.481444 2506068 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 09:31:59.481469 2506068 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 09:31:59.507185 2506068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 09:31:58.920719 2508765 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 09:31:58.920963 2508765 start.go:159] libmachine.API.Create for "default-k8s-diff-port-703627" (driver="docker")
	I1101 09:31:58.920992 2508765 client.go:173] LocalClient.Create starting
	I1101 09:31:58.921072 2508765 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem
	I1101 09:31:58.921105 2508765 main.go:143] libmachine: Decoding PEM data...
	I1101 09:31:58.921118 2508765 main.go:143] libmachine: Parsing certificate...
	I1101 09:31:58.921169 2508765 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem
	I1101 09:31:58.921185 2508765 main.go:143] libmachine: Decoding PEM data...
	I1101 09:31:58.921194 2508765 main.go:143] libmachine: Parsing certificate...
	I1101 09:31:58.921561 2508765 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-703627 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 09:31:58.944328 2508765 cli_runner.go:211] docker network inspect default-k8s-diff-port-703627 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 09:31:58.944412 2508765 network_create.go:284] running [docker network inspect default-k8s-diff-port-703627] to gather additional debugging logs...
	I1101 09:31:58.944428 2508765 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-703627
	W1101 09:31:58.969041 2508765 cli_runner.go:211] docker network inspect default-k8s-diff-port-703627 returned with exit code 1
	I1101 09:31:58.969069 2508765 network_create.go:287] error running [docker network inspect default-k8s-diff-port-703627]: docker network inspect default-k8s-diff-port-703627: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-703627 not found
	I1101 09:31:58.969081 2508765 network_create.go:289] output of [docker network inspect default-k8s-diff-port-703627]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-703627 not found
	
	** /stderr **
	I1101 09:31:58.969178 2508765 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:31:58.991566 2508765 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2d14cb2bf967 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:44:96:dd:d5:f7} reservation:<nil>}
	I1101 09:31:58.991992 2508765 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5e2113ca68f6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fa:43:2d:73:9d:6f} reservation:<nil>}
	I1101 09:31:58.992364 2508765 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-06825307e87a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:46:bb:6a:93:8e:bc} reservation:<nil>}
	I1101 09:31:58.992652 2508765 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e3dabe0b25d9 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:36:a6:6b:fa:dd:11} reservation:<nil>}
	I1101 09:31:58.993084 2508765 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c37f0}
	I1101 09:31:58.993110 2508765 network_create.go:124] attempt to create docker network default-k8s-diff-port-703627 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1101 09:31:58.993165 2508765 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-703627 default-k8s-diff-port-703627
	I1101 09:31:59.081490 2508765 network_create.go:108] docker network default-k8s-diff-port-703627 192.168.85.0/24 created
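The scan above skips the bridge networks that already exist (192.168.49.0/24, .58, .67, .76) and settles on the first free /24, 192.168.85.0/24. A condensed sketch of the same create-and-verify sequence, reusing the network name from this run and dropping the labels and ip-masq/icc options shown in the log:

	# create a bridge network with an explicit subnet/gateway, as minikube does above
	docker network create --driver=bridge \
	  --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
	  -o com.docker.network.driver.mtu=1500 \
	  default-k8s-diff-port-703627
	# confirm what was actually assigned
	docker network inspect default-k8s-diff-port-703627 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'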
	I1101 09:31:59.081537 2508765 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-703627" container
	I1101 09:31:59.081605 2508765 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 09:31:59.110085 2508765 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-703627 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-703627 --label created_by.minikube.sigs.k8s.io=true
	I1101 09:31:59.138633 2508765 oci.go:103] Successfully created a docker volume default-k8s-diff-port-703627
	I1101 09:31:59.138731 2508765 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-703627-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-703627 --entrypoint /usr/bin/test -v default-k8s-diff-port-703627:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 09:31:59.902197 2508765 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-703627
	I1101 09:31:59.902242 2508765 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:31:59.902262 2508765 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 09:31:59.902335 2508765 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-703627:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
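The preload step mounts the lz4 tarball read-only into a one-shot kicbase container and untars it straight into the named volume that will become /var of the node. Roughly the same pattern, with a placeholder tarball path and the image digest dropped (everything else is taken from the log line above):

	# extract an lz4-compressed tarball into a docker volume via a throwaway container
	docker run --rm --entrypoint /usr/bin/tar \
	  -v /path/to/preloaded-images-k8s.tar.lz4:/preloaded.tar:ro \
	  -v default-k8s-diff-port-703627:/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773 \
	  -I lz4 -xf /preloaded.tar -C /extractDir   # tar runs inside the container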
	I1101 09:32:06.091502 2506068 node_ready.go:49] node "embed-certs-312549" is "Ready"
	I1101 09:32:06.091530 2506068 node_ready.go:38] duration metric: took 7.165465648s for node "embed-certs-312549" to be "Ready" ...
	I1101 09:32:06.091544 2506068 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:32:06.091606 2506068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:32:07.987723 2506068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.016385556s)
	I1101 09:32:07.987779 2506068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.968980766s)
	I1101 09:32:07.988160 2506068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.480946857s)
	I1101 09:32:07.988925 2506068 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.897306274s)
	I1101 09:32:07.988946 2506068 api_server.go:72] duration metric: took 9.635448613s to wait for apiserver process to appear ...
	I1101 09:32:07.988952 2506068 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:32:07.988966 2506068 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 09:32:07.991971 2506068 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-312549 addons enable metrics-server
	
	I1101 09:32:08.002717 2506068 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1101 09:32:08.005296 2506068 api_server.go:141] control plane version: v1.34.1
	I1101 09:32:08.005331 2506068 api_server.go:131] duration metric: took 16.371649ms to wait for apiserver health ...
	I1101 09:32:08.005342 2506068 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:32:08.011923 2506068 system_pods.go:59] 8 kube-system pods found
	I1101 09:32:08.011956 2506068 system_pods.go:61] "coredns-66bc5c9577-jnqnt" [9c241743-79ee-45ae-a369-2b4407cec026] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:32:08.011965 2506068 system_pods.go:61] "etcd-embed-certs-312549" [52f5de46-d12b-44f9-9616-8e55b58a80e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:32:08.011973 2506068 system_pods.go:61] "kindnet-xzrpm" [9336823d-a6b8-44ac-ba96-9242d7ea9873] Running
	I1101 09:32:08.011980 2506068 system_pods.go:61] "kube-apiserver-embed-certs-312549" [6c11efc0-4c2f-4bd4-abb7-880d4ac3d8d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:32:08.011987 2506068 system_pods.go:61] "kube-controller-manager-embed-certs-312549" [8c47e850-5e66-4940-81fd-c978de94e2e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:32:08.011992 2506068 system_pods.go:61] "kube-proxy-8d2xs" [d7bfac1f-401f-4f8d-8584-a5240e63915f] Running
	I1101 09:32:08.012000 2506068 system_pods.go:61] "kube-scheduler-embed-certs-312549" [618c4131-1a72-4c19-92fe-3af613bbe965] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:32:08.012004 2506068 system_pods.go:61] "storage-provisioner" [74ce420a-03e3-4f7c-b544-860b65f44d69] Running
	I1101 09:32:08.012010 2506068 system_pods.go:74] duration metric: took 6.662068ms to wait for pod list to return data ...
	I1101 09:32:08.012017 2506068 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:32:08.014179 2506068 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1101 09:32:08.015492 2506068 default_sa.go:45] found service account: "default"
	I1101 09:32:08.015511 2506068 default_sa.go:55] duration metric: took 3.48906ms for default service account to be created ...
	I1101 09:32:08.015520 2506068 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:32:08.016997 2506068 addons.go:515] duration metric: took 9.663189439s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1101 09:32:08.019944 2506068 system_pods.go:86] 8 kube-system pods found
	I1101 09:32:08.020029 2506068 system_pods.go:89] "coredns-66bc5c9577-jnqnt" [9c241743-79ee-45ae-a369-2b4407cec026] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:32:08.020055 2506068 system_pods.go:89] "etcd-embed-certs-312549" [52f5de46-d12b-44f9-9616-8e55b58a80e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:32:08.020090 2506068 system_pods.go:89] "kindnet-xzrpm" [9336823d-a6b8-44ac-ba96-9242d7ea9873] Running
	I1101 09:32:08.020118 2506068 system_pods.go:89] "kube-apiserver-embed-certs-312549" [6c11efc0-4c2f-4bd4-abb7-880d4ac3d8d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:32:08.020141 2506068 system_pods.go:89] "kube-controller-manager-embed-certs-312549" [8c47e850-5e66-4940-81fd-c978de94e2e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:32:08.020174 2506068 system_pods.go:89] "kube-proxy-8d2xs" [d7bfac1f-401f-4f8d-8584-a5240e63915f] Running
	I1101 09:32:08.020200 2506068 system_pods.go:89] "kube-scheduler-embed-certs-312549" [618c4131-1a72-4c19-92fe-3af613bbe965] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:32:08.020231 2506068 system_pods.go:89] "storage-provisioner" [74ce420a-03e3-4f7c-b544-860b65f44d69] Running
	I1101 09:32:08.020268 2506068 system_pods.go:126] duration metric: took 4.741677ms to wait for k8s-apps to be running ...
	I1101 09:32:08.020296 2506068 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:32:08.020391 2506068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:32:08.036105 2506068 system_svc.go:56] duration metric: took 15.801351ms WaitForService to wait for kubelet
	I1101 09:32:08.036130 2506068 kubeadm.go:587] duration metric: took 9.682630672s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:32:08.036150 2506068 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:32:08.043070 2506068 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 09:32:08.043166 2506068 node_conditions.go:123] node cpu capacity is 2
	I1101 09:32:08.043194 2506068 node_conditions.go:105] duration metric: took 7.037409ms to run NodePressure ...
	I1101 09:32:08.043245 2506068 start.go:242] waiting for startup goroutines ...
	I1101 09:32:08.043272 2506068 start.go:247] waiting for cluster config update ...
	I1101 09:32:08.043301 2506068 start.go:256] writing updated cluster config ...
	I1101 09:32:08.043696 2506068 ssh_runner.go:195] Run: rm -f paused
	I1101 09:32:08.047992 2506068 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:32:08.053436 2506068 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jnqnt" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:32:04.752216 2508765 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-703627:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.849846771s)
	I1101 09:32:04.752249 2508765 kic.go:203] duration metric: took 4.849983227s to extract preloaded images to volume ...
	W1101 09:32:04.752392 2508765 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 09:32:04.752494 2508765 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 09:32:04.876293 2508765 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-703627 --name default-k8s-diff-port-703627 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-703627 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-703627 --network default-k8s-diff-port-703627 --ip 192.168.85.2 --volume default-k8s-diff-port-703627:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 09:32:05.301050 2508765 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-703627 --format={{.State.Running}}
	I1101 09:32:05.326569 2508765 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-703627 --format={{.State.Status}}
	I1101 09:32:05.352504 2508765 cli_runner.go:164] Run: docker exec default-k8s-diff-port-703627 stat /var/lib/dpkg/alternatives/iptables
	I1101 09:32:05.429416 2508765 oci.go:144] the created container "default-k8s-diff-port-703627" has a running status.
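Docker has published the container's port 22 on a random loopback port (36365 in this run); the inspect template used a few lines below is how minikube looks it up. For poking at the node by hand, something along these lines works (the ssh invocation itself is an assumption, not a step this test runs):

	# find the host port mapped to the container's SSH port
	PORT=$(docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  default-k8s-diff-port-703627)
	# key path taken from this run
	ssh -i /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/default-k8s-diff-port-703627/id_rsa \
	  -p "$PORT" docker@127.0.0.1 hostname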
	I1101 09:32:05.429451 2508765 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/default-k8s-diff-port-703627/id_rsa...
	I1101 09:32:05.690250 2508765 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/default-k8s-diff-port-703627/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 09:32:05.734124 2508765 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-703627 --format={{.State.Status}}
	I1101 09:32:05.766584 2508765 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 09:32:05.766602 2508765 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-703627 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 09:32:05.836135 2508765 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-703627 --format={{.State.Status}}
	I1101 09:32:05.868210 2508765 machine.go:94] provisionDockerMachine start ...
	I1101 09:32:05.868316 2508765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-703627
	I1101 09:32:05.895706 2508765 main.go:143] libmachine: Using SSH client type: native
	I1101 09:32:05.896107 2508765 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36365 <nil> <nil>}
	I1101 09:32:05.896124 2508765 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:32:05.896653 2508765 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53610->127.0.0.1:36365: read: connection reset by peer
	I1101 09:32:09.058915 2508765 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-703627
	
	I1101 09:32:09.058941 2508765 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-703627"
	I1101 09:32:09.059016 2508765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-703627
	I1101 09:32:09.081461 2508765 main.go:143] libmachine: Using SSH client type: native
	I1101 09:32:09.081774 2508765 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36365 <nil> <nil>}
	I1101 09:32:09.081786 2508765 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-703627 && echo "default-k8s-diff-port-703627" | sudo tee /etc/hostname
	I1101 09:32:09.241022 2508765 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-703627
	
	I1101 09:32:09.241092 2508765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-703627
	I1101 09:32:09.269970 2508765 main.go:143] libmachine: Using SSH client type: native
	I1101 09:32:09.270310 2508765 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36365 <nil> <nil>}
	I1101 09:32:09.270338 2508765 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-703627' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-703627/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-703627' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:32:09.435968 2508765 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:32:09.436033 2508765 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-2314135/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-2314135/.minikube}
	I1101 09:32:09.436065 2508765 ubuntu.go:190] setting up certificates
	I1101 09:32:09.436089 2508765 provision.go:84] configureAuth start
	I1101 09:32:09.436177 2508765 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-703627
	I1101 09:32:09.458157 2508765 provision.go:143] copyHostCerts
	I1101 09:32:09.458220 2508765 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem, removing ...
	I1101 09:32:09.458229 2508765 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem
	I1101 09:32:09.458306 2508765 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem (1082 bytes)
	I1101 09:32:09.458397 2508765 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem, removing ...
	I1101 09:32:09.458402 2508765 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem
	I1101 09:32:09.458428 2508765 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem (1123 bytes)
	I1101 09:32:09.458481 2508765 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem, removing ...
	I1101 09:32:09.458485 2508765 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem
	I1101 09:32:09.458522 2508765 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem (1675 bytes)
	I1101 09:32:09.458569 2508765 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-703627 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-703627 localhost minikube]
	I1101 09:32:09.861321 2508765 provision.go:177] copyRemoteCerts
	I1101 09:32:09.861430 2508765 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:32:09.861486 2508765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-703627
	I1101 09:32:09.879067 2508765 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36365 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/default-k8s-diff-port-703627/id_rsa Username:docker}
	I1101 09:32:09.988432 2508765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:32:10.013092 2508765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1101 09:32:10.046954 2508765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:32:10.082035 2508765 provision.go:87] duration metric: took 645.891571ms to configureAuth
	I1101 09:32:10.082115 2508765 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:32:10.082365 2508765 config.go:182] Loaded profile config "default-k8s-diff-port-703627": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:32:10.082542 2508765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-703627
	I1101 09:32:10.107838 2508765 main.go:143] libmachine: Using SSH client type: native
	I1101 09:32:10.108180 2508765 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36365 <nil> <nil>}
	I1101 09:32:10.108194 2508765 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:32:10.429928 2508765 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:32:10.429950 2508765 machine.go:97] duration metric: took 4.56171418s to provisionDockerMachine
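The last provisioning command above drops a CRIO_MINIKUBE_OPTIONS override into /etc/sysconfig/crio.minikube and bounces crio, so pulls from registries on the 10.96.0.0/12 service range are allowed without TLS. Written out plainly (contents copied from the log):

	sudo mkdir -p /etc/sysconfig
	printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
	  | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio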
	I1101 09:32:10.429968 2508765 client.go:176] duration metric: took 11.508961657s to LocalClient.Create
	I1101 09:32:10.429982 2508765 start.go:167] duration metric: took 11.509020774s to libmachine.API.Create "default-k8s-diff-port-703627"
	I1101 09:32:10.429990 2508765 start.go:293] postStartSetup for "default-k8s-diff-port-703627" (driver="docker")
	I1101 09:32:10.430001 2508765 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:32:10.430063 2508765 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:32:10.430109 2508765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-703627
	I1101 09:32:10.448257 2508765 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36365 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/default-k8s-diff-port-703627/id_rsa Username:docker}
	I1101 09:32:10.554105 2508765 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:32:10.559547 2508765 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:32:10.559573 2508765 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:32:10.559583 2508765 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/addons for local assets ...
	I1101 09:32:10.559654 2508765 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/files for local assets ...
	I1101 09:32:10.559751 2508765 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem -> 23159822.pem in /etc/ssl/certs
	I1101 09:32:10.559892 2508765 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:32:10.569138 2508765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem --> /etc/ssl/certs/23159822.pem (1708 bytes)
	I1101 09:32:10.588247 2508765 start.go:296] duration metric: took 158.242356ms for postStartSetup
	I1101 09:32:10.588667 2508765 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-703627
	I1101 09:32:10.605395 2508765 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/config.json ...
	I1101 09:32:10.605666 2508765 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:32:10.605713 2508765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-703627
	I1101 09:32:10.622593 2508765 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36365 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/default-k8s-diff-port-703627/id_rsa Username:docker}
	I1101 09:32:10.724894 2508765 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:32:10.729412 2508765 start.go:128] duration metric: took 11.812106507s to createHost
	I1101 09:32:10.729441 2508765 start.go:83] releasing machines lock for "default-k8s-diff-port-703627", held for 11.812240509s
	I1101 09:32:10.729524 2508765 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-703627
	I1101 09:32:10.746638 2508765 ssh_runner.go:195] Run: cat /version.json
	I1101 09:32:10.746688 2508765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-703627
	I1101 09:32:10.746712 2508765 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:32:10.746777 2508765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-703627
	I1101 09:32:10.781500 2508765 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36365 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/default-k8s-diff-port-703627/id_rsa Username:docker}
	I1101 09:32:10.793314 2508765 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36365 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/default-k8s-diff-port-703627/id_rsa Username:docker}
	I1101 09:32:10.887627 2508765 ssh_runner.go:195] Run: systemctl --version
	I1101 09:32:10.980224 2508765 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:32:11.018175 2508765 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:32:11.022486 2508765 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:32:11.022559 2508765 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:32:11.052538 2508765 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 09:32:11.052571 2508765 start.go:496] detecting cgroup driver to use...
	I1101 09:32:11.052622 2508765 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 09:32:11.052718 2508765 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:32:11.071196 2508765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:32:11.085059 2508765 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:32:11.085137 2508765 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:32:11.104621 2508765 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:32:11.125006 2508765 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:32:11.255955 2508765 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:32:11.376102 2508765 docker.go:234] disabling docker service ...
	I1101 09:32:11.376171 2508765 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:32:11.396531 2508765 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:32:11.410077 2508765 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:32:11.527631 2508765 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:32:11.650826 2508765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:32:11.663374 2508765 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:32:11.677277 2508765 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:32:11.677390 2508765 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:32:11.685814 2508765 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:32:11.685878 2508765 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:32:11.694241 2508765 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:32:11.702778 2508765 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:32:11.711525 2508765 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:32:11.719974 2508765 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:32:11.728362 2508765 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:32:11.740826 2508765 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
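The sed calls above patch /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon's cgroup, and a default sysctl that opens unprivileged ports. Collapsed into one sketch (same file and keys as in the log; the default_sysctls edit is left out for brevity):

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	sudo systemctl daemon-reload && sudo systemctl restart crio   # applied a few lines below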
	I1101 09:32:11.749756 2508765 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:32:11.757062 2508765 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:32:11.764039 2508765 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:32:11.903360 2508765 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:32:12.062095 2508765 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:32:12.062164 2508765 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:32:12.066089 2508765 start.go:564] Will wait 60s for crictl version
	I1101 09:32:12.066153 2508765 ssh_runner.go:195] Run: which crictl
	I1101 09:32:12.069715 2508765 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:32:12.097157 2508765 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:32:12.097250 2508765 ssh_runner.go:195] Run: crio --version
	I1101 09:32:12.129509 2508765 ssh_runner.go:195] Run: crio --version
	I1101 09:32:12.171118 2508765 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1101 09:32:10.124738 2506068 pod_ready.go:104] pod "coredns-66bc5c9577-jnqnt" is not "Ready", error: <nil>
	W1101 09:32:12.564882 2506068 pod_ready.go:104] pod "coredns-66bc5c9577-jnqnt" is not "Ready", error: <nil>
	I1101 09:32:12.174180 2508765 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-703627 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:32:12.197082 2508765 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 09:32:12.202284 2508765 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:32:12.213992 2508765 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-703627 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-703627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:32:12.214111 2508765 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:32:12.214171 2508765 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:32:12.259654 2508765 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:32:12.259681 2508765 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:32:12.259738 2508765 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:32:12.307258 2508765 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:32:12.307285 2508765 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:32:12.307294 2508765 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1101 09:32:12.307378 2508765 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-703627 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-703627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:32:12.307455 2508765 ssh_runner.go:195] Run: crio config
	I1101 09:32:12.380211 2508765 cni.go:84] Creating CNI manager for ""
	I1101 09:32:12.380241 2508765 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:32:12.380260 2508765 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:32:12.380282 2508765 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-703627 NodeName:default-k8s-diff-port-703627 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:32:12.380432 2508765 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-703627"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:32:12.380503 2508765 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:32:12.394231 2508765 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:32:12.394296 2508765 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:32:12.404750 2508765 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1101 09:32:12.420527 2508765 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:32:12.436262 2508765 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
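At this point the kubeadm config printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) has been copied to /var/tmp/minikube/kubeadm.yaml.new on the node. If you ever need to sanity-check such a file before a real init, a dry run against the same binary is one option; this is a suggestion, not a step this test performs:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run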
	I1101 09:32:12.454091 2508765 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:32:12.460241 2508765 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:32:12.472668 2508765 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:32:12.658557 2508765 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:32:12.684402 2508765 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627 for IP: 192.168.85.2
	I1101 09:32:12.684483 2508765 certs.go:195] generating shared ca certs ...
	I1101 09:32:12.684550 2508765 certs.go:227] acquiring lock for ca certs: {Name:mk24842b93d4e231663829c7c8677798ff77a3a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:32:12.684783 2508765 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key
	I1101 09:32:12.684844 2508765 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key
	I1101 09:32:12.684851 2508765 certs.go:257] generating profile certs ...
	I1101 09:32:12.684913 2508765 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/client.key
	I1101 09:32:12.684929 2508765 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/client.crt with IP's: []
	I1101 09:32:12.860062 2508765 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/client.crt ...
	I1101 09:32:12.860096 2508765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/client.crt: {Name:mk2c762d23e021a8c8564f6a1b66b779c7bdaa56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:32:12.860315 2508765 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/client.key ...
	I1101 09:32:12.860339 2508765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/client.key: {Name:mkda4c695ebe1c439a322831ac1341a4575dc783 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:32:12.860451 2508765 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/apiserver.key.3f1ecf36
	I1101 09:32:12.860471 2508765 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/apiserver.crt.3f1ecf36 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1101 09:32:13.432339 2508765 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/apiserver.crt.3f1ecf36 ...
	I1101 09:32:13.432408 2508765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/apiserver.crt.3f1ecf36: {Name:mkfc59b1ebc5f85eccf6c67497b70990b3cc7f37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:32:13.432650 2508765 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/apiserver.key.3f1ecf36 ...
	I1101 09:32:13.432685 2508765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/apiserver.key.3f1ecf36: {Name:mkbc11b514f668962e85580866dcccd9f9140198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:32:13.432833 2508765 certs.go:382] copying /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/apiserver.crt.3f1ecf36 -> /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/apiserver.crt
	I1101 09:32:13.432979 2508765 certs.go:386] copying /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/apiserver.key.3f1ecf36 -> /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/apiserver.key
	I1101 09:32:13.433085 2508765 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/proxy-client.key
	I1101 09:32:13.433122 2508765 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/proxy-client.crt with IP's: []
	I1101 09:32:15.557685 2508765 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/proxy-client.crt ...
	I1101 09:32:15.557765 2508765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/proxy-client.crt: {Name:mk7141a4976d8e6cc10566e85902faa6bb78821a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:32:15.558001 2508765 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/proxy-client.key ...
	I1101 09:32:15.558039 2508765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/proxy-client.key: {Name:mk5f34ac4f623371024f2515e4c4835a7ef41854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:32:15.558385 2508765 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem (1338 bytes)
	W1101 09:32:15.558461 2508765 certs.go:480] ignoring /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982_empty.pem, impossibly tiny 0 bytes
	I1101 09:32:15.558504 2508765 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 09:32:15.558555 2508765 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:32:15.558615 2508765 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:32:15.558663 2508765 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem (1675 bytes)
	I1101 09:32:15.558744 2508765 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem (1708 bytes)
	I1101 09:32:15.560740 2508765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:32:15.589665 2508765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 09:32:15.610287 2508765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:32:15.630832 2508765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:32:15.661426 2508765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1101 09:32:15.681495 2508765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:32:15.701212 2508765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:32:15.722720 2508765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 09:32:15.750617 2508765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem --> /usr/share/ca-certificates/23159822.pem (1708 bytes)
	I1101 09:32:15.786842 2508765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:32:15.818236 2508765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem --> /usr/share/ca-certificates/2315982.pem (1338 bytes)
	I1101 09:32:15.847677 2508765 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:32:15.860802 2508765 ssh_runner.go:195] Run: openssl version
	I1101 09:32:15.867308 2508765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23159822.pem && ln -fs /usr/share/ca-certificates/23159822.pem /etc/ssl/certs/23159822.pem"
	I1101 09:32:15.875421 2508765 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23159822.pem
	I1101 09:32:15.879971 2508765 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 08:36 /usr/share/ca-certificates/23159822.pem
	I1101 09:32:15.880044 2508765 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23159822.pem
	I1101 09:32:15.923493 2508765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23159822.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:32:15.931919 2508765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:32:15.940541 2508765 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:32:15.944856 2508765 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:32:15.944941 2508765 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:32:15.987818 2508765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:32:15.997577 2508765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2315982.pem && ln -fs /usr/share/ca-certificates/2315982.pem /etc/ssl/certs/2315982.pem"
	I1101 09:32:16.017618 2508765 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2315982.pem
	I1101 09:32:16.023426 2508765 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 08:36 /usr/share/ca-certificates/2315982.pem
	I1101 09:32:16.023509 2508765 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2315982.pem
	I1101 09:32:16.102053 2508765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2315982.pem /etc/ssl/certs/51391683.0"
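The openssl/ln pairs above implement the standard CA-trust layout: each PEM under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (b5213941.0 for minikubeCA.pem in this run). The generic form:

	PEM=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$PEM")   # prints e.g. b5213941
	sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"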
	I1101 09:32:16.111485 2508765 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:32:16.116264 2508765 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 09:32:16.116336 2508765 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-703627 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-703627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:32:16.116434 2508765 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:32:16.116506 2508765 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:32:16.154361 2508765 cri.go:89] found id: ""
	I1101 09:32:16.154453 2508765 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:32:16.165484 2508765 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:32:16.174667 2508765 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 09:32:16.174743 2508765 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:32:16.186930 2508765 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 09:32:16.186954 2508765 kubeadm.go:158] found existing configuration files:
	
	I1101 09:32:16.187022 2508765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1101 09:32:16.196488 2508765 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 09:32:16.196564 2508765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 09:32:16.204889 2508765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1101 09:32:16.215035 2508765 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 09:32:16.215117 2508765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:32:16.223933 2508765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1101 09:32:16.233289 2508765 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 09:32:16.233371 2508765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:32:16.241658 2508765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1101 09:32:16.250123 2508765 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 09:32:16.250195 2508765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
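	The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed so kubeadm can regenerate it. A rough shell equivalent (endpoint and paths taken from the log lines above):
	  for f in admin kubelet controller-manager scheduler; do
	    sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/${f}.conf" \
	      || sudo rm -f "/etc/kubernetes/${f}.conf"
	  done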
	I1101 09:32:16.258339 2508765 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 09:32:16.345859 2508765 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 09:32:16.346144 2508765 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 09:32:16.392228 2508765 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 09:32:16.392545 2508765 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 09:32:16.392594 2508765 kubeadm.go:319] OS: Linux
	I1101 09:32:16.392662 2508765 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 09:32:16.392725 2508765 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 09:32:16.392787 2508765 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 09:32:16.392846 2508765 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 09:32:16.392900 2508765 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 09:32:16.392961 2508765 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 09:32:16.393019 2508765 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 09:32:16.393079 2508765 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 09:32:16.393131 2508765 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 09:32:16.474747 2508765 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 09:32:16.474880 2508765 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 09:32:16.474992 2508765 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 09:32:16.484807 2508765 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1101 09:32:15.059775 2506068 pod_ready.go:104] pod "coredns-66bc5c9577-jnqnt" is not "Ready", error: <nil>
	W1101 09:32:17.559844 2506068 pod_ready.go:104] pod "coredns-66bc5c9577-jnqnt" is not "Ready", error: <nil>
	I1101 09:32:16.490760 2508765 out.go:252]   - Generating certificates and keys ...
	I1101 09:32:16.490866 2508765 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 09:32:16.490946 2508765 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 09:32:17.039684 2508765 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 09:32:17.783842 2508765 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 09:32:18.452093 2508765 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	W1101 09:32:19.564466 2506068 pod_ready.go:104] pod "coredns-66bc5c9577-jnqnt" is not "Ready", error: <nil>
	W1101 09:32:22.061016 2506068 pod_ready.go:104] pod "coredns-66bc5c9577-jnqnt" is not "Ready", error: <nil>
	I1101 09:32:19.624060 2508765 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 09:32:19.904214 2508765 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 09:32:19.904828 2508765 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-703627 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 09:32:19.975375 2508765 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 09:32:19.975970 2508765 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-703627 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 09:32:20.143528 2508765 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 09:32:21.457290 2508765 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 09:32:21.660745 2508765 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 09:32:21.661255 2508765 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 09:32:23.130838 2508765 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 09:32:23.420192 2508765 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 09:32:23.913968 2508765 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 09:32:24.323713 2508765 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 09:32:25.165715 2508765 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 09:32:25.166908 2508765 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 09:32:25.169922 2508765 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1101 09:32:24.061351 2506068 pod_ready.go:104] pod "coredns-66bc5c9577-jnqnt" is not "Ready", error: <nil>
	W1101 09:32:26.560877 2506068 pod_ready.go:104] pod "coredns-66bc5c9577-jnqnt" is not "Ready", error: <nil>
	I1101 09:32:25.175491 2508765 out.go:252]   - Booting up control plane ...
	I1101 09:32:25.175603 2508765 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:32:25.175725 2508765 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:32:25.175891 2508765 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 09:32:25.190506 2508765 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:32:25.190619 2508765 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 09:32:25.199100 2508765 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 09:32:25.199788 2508765 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:32:25.200204 2508765 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:32:25.330580 2508765 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 09:32:25.330705 2508765 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 09:32:26.331005 2508765 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000693161s
	I1101 09:32:26.343370 2508765 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 09:32:26.343488 2508765 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1101 09:32:26.343588 2508765 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 09:32:26.343676 2508765 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1101 09:32:29.059141 2506068 pod_ready.go:104] pod "coredns-66bc5c9577-jnqnt" is not "Ready", error: <nil>
	W1101 09:32:31.059277 2506068 pod_ready.go:104] pod "coredns-66bc5c9577-jnqnt" is not "Ready", error: <nil>
	I1101 09:32:29.514585 2508765 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.170629236s
	I1101 09:32:32.627510 2508765 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.284120701s
	I1101 09:32:34.345853 2508765 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.002313817s
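	The three control-plane health endpoints polled above can also be probed by hand from the node; a sketch using the addresses and ports from the log (-k because the components serve cluster-internal certificates, and these health paths are anonymously readable under kubeadm defaults):
	  curl -sk https://192.168.85.2:8444/livez      # kube-apiserver
	  curl -sk https://127.0.0.1:10257/healthz      # kube-controller-manager
	  curl -sk https://127.0.0.1:10259/livez        # kube-scheduler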
	I1101 09:32:34.366815 2508765 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 09:32:34.381347 2508765 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 09:32:34.396523 2508765 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 09:32:34.396830 2508765 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-703627 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 09:32:34.408731 2508765 kubeadm.go:319] [bootstrap-token] Using token: rg7jt1.ljre0wz8jdt44ha8
	I1101 09:32:34.411646 2508765 out.go:252]   - Configuring RBAC rules ...
	I1101 09:32:34.411780 2508765 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 09:32:34.417667 2508765 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 09:32:34.428168 2508765 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 09:32:34.432260 2508765 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 09:32:34.436804 2508765 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 09:32:34.441110 2508765 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 09:32:34.755552 2508765 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 09:32:35.215951 2508765 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 09:32:35.754165 2508765 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 09:32:35.755546 2508765 kubeadm.go:319] 
	I1101 09:32:35.755634 2508765 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 09:32:35.755645 2508765 kubeadm.go:319] 
	I1101 09:32:35.755734 2508765 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 09:32:35.755743 2508765 kubeadm.go:319] 
	I1101 09:32:35.755773 2508765 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 09:32:35.755838 2508765 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 09:32:35.755947 2508765 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 09:32:35.755960 2508765 kubeadm.go:319] 
	I1101 09:32:35.756016 2508765 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 09:32:35.756020 2508765 kubeadm.go:319] 
	I1101 09:32:35.756074 2508765 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 09:32:35.756078 2508765 kubeadm.go:319] 
	I1101 09:32:35.756130 2508765 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 09:32:35.756205 2508765 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 09:32:35.756273 2508765 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 09:32:35.756278 2508765 kubeadm.go:319] 
	I1101 09:32:35.756361 2508765 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 09:32:35.756438 2508765 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 09:32:35.756443 2508765 kubeadm.go:319] 
	I1101 09:32:35.756527 2508765 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token rg7jt1.ljre0wz8jdt44ha8 \
	I1101 09:32:35.756630 2508765 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4543f3590cccb8495171c728a2631a18a238961aafa5b09f43cdaf25ae01fa5d \
	I1101 09:32:35.756657 2508765 kubeadm.go:319] 	--control-plane 
	I1101 09:32:35.756663 2508765 kubeadm.go:319] 
	I1101 09:32:35.756747 2508765 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 09:32:35.756751 2508765 kubeadm.go:319] 
	I1101 09:32:35.756833 2508765 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token rg7jt1.ljre0wz8jdt44ha8 \
	I1101 09:32:35.756934 2508765 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4543f3590cccb8495171c728a2631a18a238961aafa5b09f43cdaf25ae01fa5d 
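	The --discovery-token-ca-cert-hash in the join commands above can be re-derived from the cluster CA before joining a node; a sketch using the standard kubeadm recipe, assuming the CA sits in minikube's certificateDir shown earlier (/var/lib/minikube/certs):
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'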
	I1101 09:32:35.760694 2508765 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 09:32:35.760927 2508765 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 09:32:35.761056 2508765 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 09:32:35.761079 2508765 cni.go:84] Creating CNI manager for ""
	I1101 09:32:35.761087 2508765 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:32:35.764257 2508765 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1101 09:32:33.559557 2506068 pod_ready.go:104] pod "coredns-66bc5c9577-jnqnt" is not "Ready", error: <nil>
	W1101 09:32:36.060136 2506068 pod_ready.go:104] pod "coredns-66bc5c9577-jnqnt" is not "Ready", error: <nil>
	I1101 09:32:35.767071 2508765 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 09:32:35.771810 2508765 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 09:32:35.771831 2508765 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 09:32:35.785930 2508765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
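	Once the kindnet manifest has been applied, the CNI drop-in and the DaemonSet can be verified like this (a sketch; the DaemonSet name and pod label follow minikube's kindnet addon and are assumptions here):
	  ls /etc/cni/net.d/                      # expect 10-kindnet.conflist
	  kubectl -n kube-system get ds kindnet
	  kubectl -n kube-system get pods -l app=kindnet -o wide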
	I1101 09:32:36.180103 2508765 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:32:36.180256 2508765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:32:36.180334 2508765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-703627 minikube.k8s.io/updated_at=2025_11_01T09_32_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192 minikube.k8s.io/name=default-k8s-diff-port-703627 minikube.k8s.io/primary=true
	I1101 09:32:36.397143 2508765 ops.go:34] apiserver oom_adj: -16
	I1101 09:32:36.397258 2508765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:32:36.897530 2508765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:32:37.398010 2508765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:32:37.898302 2508765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:32:38.397315 2508765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:32:38.897310 2508765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:32:39.397560 2508765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:32:39.897447 2508765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:32:39.988503 2508765 kubeadm.go:1114] duration metric: took 3.808291297s to wait for elevateKubeSystemPrivileges
	I1101 09:32:39.988531 2508765 kubeadm.go:403] duration metric: took 23.872201067s to StartCluster
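	The repeated "kubectl get sa default" calls above are minikube polling for the default ServiceAccount to be created before it treats the elevateKubeSystemPrivileges step as done; roughly equivalent to:
	  until sudo /var/lib/minikube/binaries/v1.34.1/kubectl \
	      --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
	    sleep 0.5
	  done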
	I1101 09:32:39.988549 2508765 settings.go:142] acquiring lock: {Name:mka73a3765cb6575d4abe38a6ae3325222684786 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:32:39.988607 2508765 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:32:39.991006 2508765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/kubeconfig: {Name:mk53329368b7306829f4e47471838b13e1e36d2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:32:39.991253 2508765 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:32:39.991267 2508765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 09:32:39.991536 2508765 config.go:182] Loaded profile config "default-k8s-diff-port-703627": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:32:39.991626 2508765 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:32:39.991683 2508765 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-703627"
	I1101 09:32:39.991698 2508765 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-703627"
	I1101 09:32:39.991719 2508765 host.go:66] Checking if "default-k8s-diff-port-703627" exists ...
	I1101 09:32:39.992225 2508765 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-703627 --format={{.State.Status}}
	I1101 09:32:39.992711 2508765 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-703627"
	I1101 09:32:39.992733 2508765 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-703627"
	I1101 09:32:39.992994 2508765 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-703627 --format={{.State.Status}}
	I1101 09:32:39.995458 2508765 out.go:179] * Verifying Kubernetes components...
	I1101 09:32:40.004015 2508765 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:32:40.048725 2508765 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:32:40.051007 2508765 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-703627"
	I1101 09:32:40.051081 2508765 host.go:66] Checking if "default-k8s-diff-port-703627" exists ...
	I1101 09:32:40.051592 2508765 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-703627 --format={{.State.Status}}
	I1101 09:32:40.051941 2508765 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:32:40.051972 2508765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:32:40.052074 2508765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-703627
	I1101 09:32:40.096162 2508765 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36365 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/default-k8s-diff-port-703627/id_rsa Username:docker}
	I1101 09:32:40.115128 2508765 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:32:40.115152 2508765 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:32:40.115215 2508765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-703627
	I1101 09:32:40.145246 2508765 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36365 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/default-k8s-diff-port-703627/id_rsa Username:docker}
	I1101 09:32:40.256715 2508765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
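	The sed pipeline above injects a hosts block (plus a log directive) into the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway; the injected fragment is:
	        hosts {
	           192.168.85.1 host.minikube.internal
	           fallthrough
	        }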
	I1101 09:32:40.315753 2508765 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:32:40.361310 2508765 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:32:40.408262 2508765 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:32:40.852728 2508765 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1101 09:32:40.853967 2508765 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-703627" to be "Ready" ...
	W1101 09:32:40.934206 2508765 kapi.go:211] failed rescaling "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-703627" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E1101 09:32:40.934232 2508765 start.go:161] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I1101 09:32:41.193740 2508765 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1101 09:32:38.559590 2506068 pod_ready.go:104] pod "coredns-66bc5c9577-jnqnt" is not "Ready", error: <nil>
	W1101 09:32:41.060234 2506068 pod_ready.go:104] pod "coredns-66bc5c9577-jnqnt" is not "Ready", error: <nil>
	I1101 09:32:41.197609 2508765 addons.go:515] duration metric: took 1.205963151s for enable addons: enabled=[default-storageclass storage-provisioner]
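	The coredns rescale warning above is non-fatal: the deployment keeps its current replica count and startup continues. If a single replica is wanted, the downscale can be retried once the object settles, e.g.:
	  kubectl -n kube-system scale deployment coredns --replicas=1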
	W1101 09:32:42.859219 2508765 node_ready.go:57] node "default-k8s-diff-port-703627" has "Ready":"False" status (will retry)
	W1101 09:32:43.559332 2506068 pod_ready.go:104] pod "coredns-66bc5c9577-jnqnt" is not "Ready", error: <nil>
	W1101 09:32:45.559723 2506068 pod_ready.go:104] pod "coredns-66bc5c9577-jnqnt" is not "Ready", error: <nil>
	W1101 09:32:48.059761 2506068 pod_ready.go:104] pod "coredns-66bc5c9577-jnqnt" is not "Ready", error: <nil>
	W1101 09:32:45.371493 2508765 node_ready.go:57] node "default-k8s-diff-port-703627" has "Ready":"False" status (will retry)
	W1101 09:32:47.859206 2508765 node_ready.go:57] node "default-k8s-diff-port-703627" has "Ready":"False" status (will retry)
	I1101 09:32:49.058540 2506068 pod_ready.go:94] pod "coredns-66bc5c9577-jnqnt" is "Ready"
	I1101 09:32:49.058568 2506068 pod_ready.go:86] duration metric: took 41.005030278s for pod "coredns-66bc5c9577-jnqnt" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:32:49.061456 2506068 pod_ready.go:83] waiting for pod "etcd-embed-certs-312549" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:32:49.069538 2506068 pod_ready.go:94] pod "etcd-embed-certs-312549" is "Ready"
	I1101 09:32:49.069574 2506068 pod_ready.go:86] duration metric: took 8.085109ms for pod "etcd-embed-certs-312549" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:32:49.071775 2506068 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-312549" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:32:49.076111 2506068 pod_ready.go:94] pod "kube-apiserver-embed-certs-312549" is "Ready"
	I1101 09:32:49.076135 2506068 pod_ready.go:86] duration metric: took 4.305628ms for pod "kube-apiserver-embed-certs-312549" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:32:49.078220 2506068 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-312549" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:32:49.258059 2506068 pod_ready.go:94] pod "kube-controller-manager-embed-certs-312549" is "Ready"
	I1101 09:32:49.258084 2506068 pod_ready.go:86] duration metric: took 179.842529ms for pod "kube-controller-manager-embed-certs-312549" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:32:49.457137 2506068 pod_ready.go:83] waiting for pod "kube-proxy-8d2xs" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:32:49.857443 2506068 pod_ready.go:94] pod "kube-proxy-8d2xs" is "Ready"
	I1101 09:32:49.857470 2506068 pod_ready.go:86] duration metric: took 400.308242ms for pod "kube-proxy-8d2xs" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:32:50.057830 2506068 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-312549" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:32:50.456766 2506068 pod_ready.go:94] pod "kube-scheduler-embed-certs-312549" is "Ready"
	I1101 09:32:50.456794 2506068 pod_ready.go:86] duration metric: took 398.936473ms for pod "kube-scheduler-embed-certs-312549" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:32:50.456807 2506068 pod_ready.go:40] duration metric: took 42.408721656s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:32:50.507038 2506068 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 09:32:50.510598 2506068 out.go:179] * Done! kubectl is now configured to use "embed-certs-312549" cluster and "default" namespace by default
	W1101 09:32:49.859770 2508765 node_ready.go:57] node "default-k8s-diff-port-703627" has "Ready":"False" status (will retry)
	W1101 09:32:52.359520 2508765 node_ready.go:57] node "default-k8s-diff-port-703627" has "Ready":"False" status (will retry)
	W1101 09:32:54.858442 2508765 node_ready.go:57] node "default-k8s-diff-port-703627" has "Ready":"False" status (will retry)
	W1101 09:32:56.858776 2508765 node_ready.go:57] node "default-k8s-diff-port-703627" has "Ready":"False" status (will retry)
	W1101 09:32:59.358446 2508765 node_ready.go:57] node "default-k8s-diff-port-703627" has "Ready":"False" status (will retry)
	W1101 09:33:01.359702 2508765 node_ready.go:57] node "default-k8s-diff-port-703627" has "Ready":"False" status (will retry)
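	The node_ready retries above poll the node's Ready condition; the same check by hand (node name taken from the log):
	  kubectl get node default-k8s-diff-port-703627 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'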
	
	
	==> CRI-O <==
	Nov 01 09:32:47 embed-certs-312549 crio[653]: time="2025-11-01T09:32:47.152343316Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:32:47 embed-certs-312549 crio[653]: time="2025-11-01T09:32:47.155789358Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:32:47 embed-certs-312549 crio[653]: time="2025-11-01T09:32:47.155822407Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:32:47 embed-certs-312549 crio[653]: time="2025-11-01T09:32:47.155844027Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:32:47 embed-certs-312549 crio[653]: time="2025-11-01T09:32:47.159205567Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:32:47 embed-certs-312549 crio[653]: time="2025-11-01T09:32:47.159237525Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:32:47 embed-certs-312549 crio[653]: time="2025-11-01T09:32:47.159260687Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:32:47 embed-certs-312549 crio[653]: time="2025-11-01T09:32:47.163078854Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:32:47 embed-certs-312549 crio[653]: time="2025-11-01T09:32:47.163110812Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:32:47 embed-certs-312549 crio[653]: time="2025-11-01T09:32:47.163132506Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:32:47 embed-certs-312549 crio[653]: time="2025-11-01T09:32:47.166017554Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:32:47 embed-certs-312549 crio[653]: time="2025-11-01T09:32:47.16604477Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:32:57 embed-certs-312549 crio[653]: time="2025-11-01T09:32:57.423340301Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=706cfbdc-6f88-4dff-8857-0a735d946d62 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:32:57 embed-certs-312549 crio[653]: time="2025-11-01T09:32:57.424364018Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=16643711-99eb-4b80-aa73-724d3d31a769 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:32:57 embed-certs-312549 crio[653]: time="2025-11-01T09:32:57.425363334Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-snsdd/dashboard-metrics-scraper" id=6c74b83a-5412-487b-ac58-9dbc32299b10 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:32:57 embed-certs-312549 crio[653]: time="2025-11-01T09:32:57.42547936Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:32:57 embed-certs-312549 crio[653]: time="2025-11-01T09:32:57.43252892Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:32:57 embed-certs-312549 crio[653]: time="2025-11-01T09:32:57.433208007Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:32:57 embed-certs-312549 crio[653]: time="2025-11-01T09:32:57.448531282Z" level=info msg="Created container 4247865bc1f3f240b55592eaebb9c6bd7f2d474b8128881a488f19dcbf493252: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-snsdd/dashboard-metrics-scraper" id=6c74b83a-5412-487b-ac58-9dbc32299b10 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:32:57 embed-certs-312549 crio[653]: time="2025-11-01T09:32:57.453136833Z" level=info msg="Starting container: 4247865bc1f3f240b55592eaebb9c6bd7f2d474b8128881a488f19dcbf493252" id=01ee5ead-319f-4606-a25c-6c09d29882c4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:32:57 embed-certs-312549 crio[653]: time="2025-11-01T09:32:57.454810444Z" level=info msg="Started container" PID=1715 containerID=4247865bc1f3f240b55592eaebb9c6bd7f2d474b8128881a488f19dcbf493252 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-snsdd/dashboard-metrics-scraper id=01ee5ead-319f-4606-a25c-6c09d29882c4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2d91485d6b53f831188b0439fb9e7c24b95d62c1ad3442cf4bfb397050feae65
	Nov 01 09:32:57 embed-certs-312549 conmon[1713]: conmon 4247865bc1f3f240b555 <ninfo>: container 1715 exited with status 1
	Nov 01 09:32:57 embed-certs-312549 crio[653]: time="2025-11-01T09:32:57.74261966Z" level=info msg="Removing container: ddcefe5efe0dcb19b36708e18ad76abafe74fa123641b6b402c89205d3d0731a" id=793a3c15-918d-485c-b2a0-f8f82dd7fc0b name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:32:57 embed-certs-312549 crio[653]: time="2025-11-01T09:32:57.75242299Z" level=info msg="Error loading conmon cgroup of container ddcefe5efe0dcb19b36708e18ad76abafe74fa123641b6b402c89205d3d0731a: cgroup deleted" id=793a3c15-918d-485c-b2a0-f8f82dd7fc0b name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:32:57 embed-certs-312549 crio[653]: time="2025-11-01T09:32:57.757314177Z" level=info msg="Removed container ddcefe5efe0dcb19b36708e18ad76abafe74fa123641b6b402c89205d3d0731a: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-snsdd/dashboard-metrics-scraper" id=793a3c15-918d-485c-b2a0-f8f82dd7fc0b name=/runtime.v1.RuntimeService/RemoveContainer
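	The CRI-O entries above show dashboard-metrics-scraper being recreated after exiting with status 1 (attempt 3 in the container listing below); the crash loop can be inspected directly on the node with crictl, a sketch:
	  sudo crictl ps -a --name dashboard-metrics-scraper
	  sudo crictl logs 4247865bc1f3f        # container ID prefix from the log above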
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	4247865bc1f3f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           8 seconds ago        Exited              dashboard-metrics-scraper   3                   2d91485d6b53f       dashboard-metrics-scraper-6ffb444bf9-snsdd   kubernetes-dashboard
	88f21e91d38ee       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           28 seconds ago       Running             storage-provisioner         2                   42cbe4d69d691       storage-provisioner                          kube-system
	72995eb1c1da3       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   41 seconds ago       Running             kubernetes-dashboard        0                   b24978389d492       kubernetes-dashboard-855c9754f9-gfpxp        kubernetes-dashboard
	3bad1a5a2c564       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   f431879df5193       coredns-66bc5c9577-jnqnt                     kube-system
	c78a62d8b2686       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   8c30da96ce4e0       busybox                                      default
	b61df37594a55       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   4307d6b3bf37e       kube-proxy-8d2xs                             kube-system
	94bc258df31cc       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           59 seconds ago       Exited              storage-provisioner         1                   42cbe4d69d691       storage-provisioner                          kube-system
	2e0be9c9bcec6       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           59 seconds ago       Running             kindnet-cni                 1                   4d568b30ba8dc       kindnet-xzrpm                                kube-system
	416e95ed80a8e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   c29f8052a2cba       kube-apiserver-embed-certs-312549            kube-system
	ccdcc22e1e214       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   39c68ccff27be       kube-scheduler-embed-certs-312549            kube-system
	830d779c1441c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   fc72119385ca9       kube-controller-manager-embed-certs-312549   kube-system
	680ffbebf2250       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   192ef4c6b8338       etcd-embed-certs-312549                      kube-system
	
	
	==> coredns [3bad1a5a2c56426afecd3053392722f520a806428d830ce21c18416e168ff456] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59944 - 26656 "HINFO IN 2218504287334214664.9098039000590878807. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013951563s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
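	The dial timeouts to 10.96.0.1:443 above mean CoreDNS could not reach the in-cluster apiserver Service for a while after the restart (the pod does go Ready later in the run). Whether the Service and its endpoints are wired up can be checked with (a sketch):
	  kubectl get svc kubernetes -n default -o wide
	  kubectl get endpointslices -n default -l kubernetes.io/service-name=kubernetes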
	
	
	==> describe nodes <==
	Name:               embed-certs-312549
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-312549
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=embed-certs-312549
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_30_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:30:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-312549
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:32:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:32:36 +0000   Sat, 01 Nov 2025 09:30:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:32:36 +0000   Sat, 01 Nov 2025 09:30:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:32:36 +0000   Sat, 01 Nov 2025 09:30:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:32:36 +0000   Sat, 01 Nov 2025 09:31:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-312549
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                9d18f598-7720-463f-91f2-ddc5b6ab87e3
	  Boot ID:                    eebecd53-57fd-46e5-aa39-103fca906436
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 coredns-66bc5c9577-jnqnt                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m28s
	  kube-system                 etcd-embed-certs-312549                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m33s
	  kube-system                 kindnet-xzrpm                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m27s
	  kube-system                 kube-apiserver-embed-certs-312549             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 kube-controller-manager-embed-certs-312549    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 kube-proxy-8d2xs                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-scheduler-embed-certs-312549             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-snsdd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-gfpxp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m25s              kube-proxy       
	  Normal   Starting                 58s                kube-proxy       
	  Normal   NodeHasSufficientPID     2m33s              kubelet          Node embed-certs-312549 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 2m33s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m33s              kubelet          Node embed-certs-312549 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m33s              kubelet          Node embed-certs-312549 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m33s              kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m28s              node-controller  Node embed-certs-312549 event: Registered Node embed-certs-312549 in Controller
	  Normal   NodeReady                106s               kubelet          Node embed-certs-312549 status is now: NodeReady
	  Normal   Starting                 69s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 69s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  69s (x8 over 69s)  kubelet          Node embed-certs-312549 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    69s (x8 over 69s)  kubelet          Node embed-certs-312549 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     69s (x8 over 69s)  kubelet          Node embed-certs-312549 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                node-controller  Node embed-certs-312549 event: Registered Node embed-certs-312549 in Controller
	
	
	==> dmesg <==
	[Nov 1 09:12] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:13] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:14] overlayfs: idmapped layers are currently not supported
	[  +7.992192] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:15] overlayfs: idmapped layers are currently not supported
	[ +24.457663] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:16] overlayfs: idmapped layers are currently not supported
	[ +26.408819] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:18] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:22] overlayfs: idmapped layers are currently not supported
	[ +31.970573] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:24] overlayfs: idmapped layers are currently not supported
	[ +34.721891] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:25] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:26] overlayfs: idmapped layers are currently not supported
	[  +0.217637] overlayfs: idmapped layers are currently not supported
	[ +42.063471] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:29] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:30] overlayfs: idmapped layers are currently not supported
	[ +22.794250] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:31] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:32] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [680ffbebf225019dcc88b59f2110c463dad6be34ca153a1fc7b184d965991faa] <==
	{"level":"warn","ts":"2025-11-01T09:32:03.492689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:03.520153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:03.542580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:03.573667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:03.600522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:03.634062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:03.653189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:03.702430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:03.714920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:03.760726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:03.845897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:03.866848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:03.891337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:03.938651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:03.977979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:04.028073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:04.053190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:04.078175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:04.114589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:04.131933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:04.179603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:04.226984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:04.255749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:04.309246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:04.382439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57306","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:33:06 up 18:15,  0 user,  load average: 2.91, 3.41, 3.02
	Linux embed-certs-312549 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2e0be9c9bcec658ba4517bfd0df151ba737b582e932f77ed6f859646902bd9d4] <==
	I1101 09:32:06.927880       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:32:06.948293       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 09:32:06.948424       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:32:06.948436       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:32:06.948449       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:32:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:32:07.151383       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:32:07.151404       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:32:07.151412       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:32:07.151694       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 09:32:37.151557       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 09:32:37.151555       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 09:32:37.151658       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 09:32:37.152885       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1101 09:32:38.752228       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:32:38.752364       1 metrics.go:72] Registering metrics
	I1101 09:32:38.752457       1 controller.go:711] "Syncing nftables rules"
	I1101 09:32:47.152001       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 09:32:47.152073       1 main.go:301] handling current node
	I1101 09:32:57.155951       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 09:32:57.155985       1 main.go:301] handling current node
	
	
	==> kube-apiserver [416e95ed80a8e34d4666b94df66f5dd74615f185d64387cdea0577b26bbc3aed] <==
	I1101 09:32:06.151180       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 09:32:06.152347       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 09:32:06.161562       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 09:32:06.161608       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 09:32:06.162309       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 09:32:06.165776       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 09:32:06.165828       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:32:06.171522       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 09:32:06.171556       1 policy_source.go:240] refreshing policies
	I1101 09:32:06.182440       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:32:06.183405       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 09:32:06.184109       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:32:06.193824       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1101 09:32:06.228301       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 09:32:06.234038       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:32:06.365842       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:32:07.509390       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:32:07.642817       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:32:07.731017       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:32:07.746367       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:32:07.839277       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.162.178"}
	I1101 09:32:07.860109       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.143.94"}
	I1101 09:32:09.658618       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:32:09.809014       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:32:09.907164       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [830d779c1441c7d2da6563df9cd6c13b42ae8a0d7fba581750fdabee9972e73d] <==
	I1101 09:32:09.400612       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 09:32:09.400837       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 09:32:09.400973       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 09:32:09.401112       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 09:32:09.407541       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:32:09.407634       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:32:09.407665       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:32:09.410114       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 09:32:09.416199       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:32:09.419427       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 09:32:09.423753       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 09:32:09.425315       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 09:32:09.427688       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 09:32:09.427808       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 09:32:09.427956       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-312549"
	I1101 09:32:09.428030       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 09:32:09.430687       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 09:32:09.433311       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 09:32:09.435917       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 09:32:09.448361       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 09:32:09.448374       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 09:32:09.448391       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:32:09.448404       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:32:09.461627       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:32:09.462733       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	
	
	==> kube-proxy [b61df37594a558df51b12bb67c5ad1aee69b219068de28cbc8e135755adf63ad] <==
	I1101 09:32:07.439926       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:32:07.605591       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:32:07.706964       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:32:07.707081       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 09:32:07.707188       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:32:07.743679       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:32:07.743798       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:32:07.753059       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:32:07.753692       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:32:07.753761       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:32:07.772778       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:32:07.772797       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:32:07.773119       1 config.go:200] "Starting service config controller"
	I1101 09:32:07.773126       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:32:07.773418       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:32:07.773426       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:32:07.773792       1 config.go:309] "Starting node config controller"
	I1101 09:32:07.773799       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:32:07.773804       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:32:07.877928       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:32:07.877971       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:32:07.878009       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [ccdcc22e1e2147d9e6c4608d49f176a9919f42a514223d1fda1375c8f0c44107] <==
	I1101 09:32:05.638976       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:32:05.695089       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:32:05.698731       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:32:05.698801       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:32:05.698845       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1101 09:32:05.944186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:32:05.944271       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:32:05.944329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:32:05.944381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:32:05.944429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:32:05.944476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:32:05.944519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:32:05.944566       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:32:05.944610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:32:05.944674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:32:05.944727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:32:05.944802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:32:05.944852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:32:05.944902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:32:05.944952       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:32:05.944995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:32:05.945147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:32:05.945198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:32:06.091230       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1101 09:32:07.599978       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:32:18 embed-certs-312549 kubelet[778]: I1101 09:32:18.629599     778 scope.go:117] "RemoveContainer" containerID="c96613cfce0d09f2149bad34e372b5276cb8af441a467037c999058f36787cf2"
	Nov 01 09:32:19 embed-certs-312549 kubelet[778]: I1101 09:32:19.634423     778 scope.go:117] "RemoveContainer" containerID="c96613cfce0d09f2149bad34e372b5276cb8af441a467037c999058f36787cf2"
	Nov 01 09:32:19 embed-certs-312549 kubelet[778]: I1101 09:32:19.635403     778 scope.go:117] "RemoveContainer" containerID="a6f28ddc6ec5e55b2c0345e908ceb765766b8476573696fed56e703859e1fa5b"
	Nov 01 09:32:19 embed-certs-312549 kubelet[778]: E1101 09:32:19.635708     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-snsdd_kubernetes-dashboard(8de8d7dd-5e50-487c-b2f9-c02e603af168)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-snsdd" podUID="8de8d7dd-5e50-487c-b2f9-c02e603af168"
	Nov 01 09:32:20 embed-certs-312549 kubelet[778]: I1101 09:32:20.647449     778 scope.go:117] "RemoveContainer" containerID="a6f28ddc6ec5e55b2c0345e908ceb765766b8476573696fed56e703859e1fa5b"
	Nov 01 09:32:20 embed-certs-312549 kubelet[778]: E1101 09:32:20.648408     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-snsdd_kubernetes-dashboard(8de8d7dd-5e50-487c-b2f9-c02e603af168)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-snsdd" podUID="8de8d7dd-5e50-487c-b2f9-c02e603af168"
	Nov 01 09:32:21 embed-certs-312549 kubelet[778]: I1101 09:32:21.837430     778 scope.go:117] "RemoveContainer" containerID="a6f28ddc6ec5e55b2c0345e908ceb765766b8476573696fed56e703859e1fa5b"
	Nov 01 09:32:21 embed-certs-312549 kubelet[778]: E1101 09:32:21.837636     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-snsdd_kubernetes-dashboard(8de8d7dd-5e50-487c-b2f9-c02e603af168)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-snsdd" podUID="8de8d7dd-5e50-487c-b2f9-c02e603af168"
	Nov 01 09:32:33 embed-certs-312549 kubelet[778]: I1101 09:32:33.421387     778 scope.go:117] "RemoveContainer" containerID="a6f28ddc6ec5e55b2c0345e908ceb765766b8476573696fed56e703859e1fa5b"
	Nov 01 09:32:33 embed-certs-312549 kubelet[778]: I1101 09:32:33.677356     778 scope.go:117] "RemoveContainer" containerID="a6f28ddc6ec5e55b2c0345e908ceb765766b8476573696fed56e703859e1fa5b"
	Nov 01 09:32:33 embed-certs-312549 kubelet[778]: I1101 09:32:33.677635     778 scope.go:117] "RemoveContainer" containerID="ddcefe5efe0dcb19b36708e18ad76abafe74fa123641b6b402c89205d3d0731a"
	Nov 01 09:32:33 embed-certs-312549 kubelet[778]: E1101 09:32:33.677806     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-snsdd_kubernetes-dashboard(8de8d7dd-5e50-487c-b2f9-c02e603af168)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-snsdd" podUID="8de8d7dd-5e50-487c-b2f9-c02e603af168"
	Nov 01 09:32:33 embed-certs-312549 kubelet[778]: I1101 09:32:33.706030     778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gfpxp" podStartSLOduration=12.215072658 podStartE2EDuration="24.706010436s" podCreationTimestamp="2025-11-01 09:32:09 +0000 UTC" firstStartedPulling="2025-11-01 09:32:11.910506305 +0000 UTC m=+14.945287813" lastFinishedPulling="2025-11-01 09:32:24.401444083 +0000 UTC m=+27.436225591" observedRunningTime="2025-11-01 09:32:24.677072058 +0000 UTC m=+27.711853583" watchObservedRunningTime="2025-11-01 09:32:33.706010436 +0000 UTC m=+36.740791960"
	Nov 01 09:32:37 embed-certs-312549 kubelet[778]: I1101 09:32:37.690661     778 scope.go:117] "RemoveContainer" containerID="94bc258df31ccba3243d19817bf0540c4bc6e3b16c101f6d659f8223b0db31ac"
	Nov 01 09:32:41 embed-certs-312549 kubelet[778]: I1101 09:32:41.836367     778 scope.go:117] "RemoveContainer" containerID="ddcefe5efe0dcb19b36708e18ad76abafe74fa123641b6b402c89205d3d0731a"
	Nov 01 09:32:41 embed-certs-312549 kubelet[778]: E1101 09:32:41.836555     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-snsdd_kubernetes-dashboard(8de8d7dd-5e50-487c-b2f9-c02e603af168)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-snsdd" podUID="8de8d7dd-5e50-487c-b2f9-c02e603af168"
	Nov 01 09:32:57 embed-certs-312549 kubelet[778]: I1101 09:32:57.422474     778 scope.go:117] "RemoveContainer" containerID="ddcefe5efe0dcb19b36708e18ad76abafe74fa123641b6b402c89205d3d0731a"
	Nov 01 09:32:57 embed-certs-312549 kubelet[778]: I1101 09:32:57.740091     778 scope.go:117] "RemoveContainer" containerID="ddcefe5efe0dcb19b36708e18ad76abafe74fa123641b6b402c89205d3d0731a"
	Nov 01 09:32:57 embed-certs-312549 kubelet[778]: I1101 09:32:57.740676     778 scope.go:117] "RemoveContainer" containerID="4247865bc1f3f240b55592eaebb9c6bd7f2d474b8128881a488f19dcbf493252"
	Nov 01 09:32:57 embed-certs-312549 kubelet[778]: E1101 09:32:57.740926     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-snsdd_kubernetes-dashboard(8de8d7dd-5e50-487c-b2f9-c02e603af168)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-snsdd" podUID="8de8d7dd-5e50-487c-b2f9-c02e603af168"
	Nov 01 09:33:01 embed-certs-312549 kubelet[778]: I1101 09:33:01.836249     778 scope.go:117] "RemoveContainer" containerID="4247865bc1f3f240b55592eaebb9c6bd7f2d474b8128881a488f19dcbf493252"
	Nov 01 09:33:01 embed-certs-312549 kubelet[778]: E1101 09:33:01.836899     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-snsdd_kubernetes-dashboard(8de8d7dd-5e50-487c-b2f9-c02e603af168)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-snsdd" podUID="8de8d7dd-5e50-487c-b2f9-c02e603af168"
	Nov 01 09:33:02 embed-certs-312549 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 09:33:02 embed-certs-312549 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 09:33:02 embed-certs-312549 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [72995eb1c1da3b7de9fbddf97b960ce6553fff7c8c569ec7720907d1b0ce191a] <==
	2025/11/01 09:32:24 Using namespace: kubernetes-dashboard
	2025/11/01 09:32:24 Using in-cluster config to connect to apiserver
	2025/11/01 09:32:24 Using secret token for csrf signing
	2025/11/01 09:32:24 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 09:32:24 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 09:32:24 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 09:32:24 Generating JWE encryption key
	2025/11/01 09:32:24 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 09:32:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 09:32:25 Initializing JWE encryption key from synchronized object
	2025/11/01 09:32:25 Creating in-cluster Sidecar client
	2025/11/01 09:32:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:32:25 Serving insecurely on HTTP port: 9090
	2025/11/01 09:32:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:32:24 Starting overwatch
	
	
	==> storage-provisioner [88f21e91d38eea474220af6738f4c80b59005263d6a122d6f4ea2dbb094eb4e7] <==
	I1101 09:32:37.751531       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 09:32:37.751586       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 09:32:37.753712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:41.208991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:45.469551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:49.069493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:52.123504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:55.145464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:55.150509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:32:55.150733       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 09:32:55.151346       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-312549_3958039d-628c-46ca-9d04-8c38630256d0!
	I1101 09:32:55.151165       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f44664d7-ce86-4249-89be-cbecba2dd10b", APIVersion:"v1", ResourceVersion:"645", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-312549_3958039d-628c-46ca-9d04-8c38630256d0 became leader
	W1101 09:32:55.155657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:55.161877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:32:55.252170       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-312549_3958039d-628c-46ca-9d04-8c38630256d0!
	W1101 09:32:57.165618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:57.170791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:59.174584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:59.178375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:33:01.182487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:33:01.187926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:33:03.192114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:33:03.201921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:33:05.205677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:33:05.214532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [94bc258df31ccba3243d19817bf0540c4bc6e3b16c101f6d659f8223b0db31ac] <==
	I1101 09:32:07.033557       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 09:32:37.035424       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-312549 -n embed-certs-312549
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-312549 -n embed-certs-312549: exit status 2 (377.515388ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-312549 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-312549
helpers_test.go:243: (dbg) docker inspect embed-certs-312549:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "46c884efd26a2388e1c8d6b8b4b264552137880202618095e6b019b947feb1a6",
	        "Created": "2025-11-01T09:30:05.467452429Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2506271,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:31:48.438075887Z",
	            "FinishedAt": "2025-11-01T09:31:47.357326892Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/46c884efd26a2388e1c8d6b8b4b264552137880202618095e6b019b947feb1a6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/46c884efd26a2388e1c8d6b8b4b264552137880202618095e6b019b947feb1a6/hostname",
	        "HostsPath": "/var/lib/docker/containers/46c884efd26a2388e1c8d6b8b4b264552137880202618095e6b019b947feb1a6/hosts",
	        "LogPath": "/var/lib/docker/containers/46c884efd26a2388e1c8d6b8b4b264552137880202618095e6b019b947feb1a6/46c884efd26a2388e1c8d6b8b4b264552137880202618095e6b019b947feb1a6-json.log",
	        "Name": "/embed-certs-312549",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-312549:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-312549",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "46c884efd26a2388e1c8d6b8b4b264552137880202618095e6b019b947feb1a6",
	                "LowerDir": "/var/lib/docker/overlay2/e51930860c4af8d563e9604029040cab5d84be7600dfc7a374b99215830131ec-init/diff:/var/lib/docker/overlay2/e248e2c4c8c52e2b41c7098e27a1e6d3433c7b0d01c47093073da500268c4b77/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e51930860c4af8d563e9604029040cab5d84be7600dfc7a374b99215830131ec/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e51930860c4af8d563e9604029040cab5d84be7600dfc7a374b99215830131ec/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e51930860c4af8d563e9604029040cab5d84be7600dfc7a374b99215830131ec/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-312549",
	                "Source": "/var/lib/docker/volumes/embed-certs-312549/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-312549",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-312549",
	                "name.minikube.sigs.k8s.io": "embed-certs-312549",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "23ac91cad5b064fc80037bb63bdba2775d89777855afce7df142b857656efb35",
	            "SandboxKey": "/var/run/docker/netns/23ac91cad5b0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36360"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36361"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36364"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36362"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36363"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-312549": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:a0:cc:5d:83:e2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e3dabe0b25d9c671a5a74ecef725675d174c55efcf863b93a552f738453017d3",
	                    "EndpointID": "9a1255b68c09baf350a1997e1df2b3060ed0bcea51dc8355d0a3e6afca4a0ea9",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-312549",
	                        "46c884efd26a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-312549 -n embed-certs-312549
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-312549 -n embed-certs-312549: exit status 2 (382.169979ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-312549 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-312549 logs -n 25: (1.306099268s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-068218 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-068218       │ jenkins │ v1.37.0 │ 01 Nov 25 09:27 UTC │ 01 Nov 25 09:28 UTC │
	│ image   │ old-k8s-version-068218 image list --format=json                                                                                                                                                                                               │ old-k8s-version-068218       │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │ 01 Nov 25 09:28 UTC │
	│ pause   │ -p old-k8s-version-068218 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-068218       │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │                     │
	│ delete  │ -p old-k8s-version-068218                                                                                                                                                                                                                     │ old-k8s-version-068218       │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ delete  │ -p old-k8s-version-068218                                                                                                                                                                                                                     │ old-k8s-version-068218       │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ start   │ -p no-preload-357229 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:30 UTC │
	│ start   │ -p cert-expiration-218273 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-218273       │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ delete  │ -p cert-expiration-218273                                                                                                                                                                                                                     │ cert-expiration-218273       │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ start   │ -p embed-certs-312549 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:31 UTC │
	│ addons  │ enable metrics-server -p no-preload-357229 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │                     │
	│ stop    │ -p no-preload-357229 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
	│ addons  │ enable dashboard -p no-preload-357229 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
	│ start   │ -p no-preload-357229 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:31 UTC │
	│ addons  │ enable metrics-server -p embed-certs-312549 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ stop    │ -p embed-certs-312549 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ image   │ no-preload-357229 image list --format=json                                                                                                                                                                                                    │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ addons  │ enable dashboard -p embed-certs-312549 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ start   │ -p embed-certs-312549 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:32 UTC │
	│ pause   │ -p no-preload-357229 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ delete  │ -p no-preload-357229                                                                                                                                                                                                                          │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ delete  │ -p no-preload-357229                                                                                                                                                                                                                          │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ delete  │ -p disable-driver-mounts-054033                                                                                                                                                                                                               │ disable-driver-mounts-054033 │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ start   │ -p default-k8s-diff-port-703627 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-703627 │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ image   │ embed-certs-312549 image list --format=json                                                                                                                                                                                                   │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ pause   │ -p embed-certs-312549 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:31:58
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:31:58.493383 2508765 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:31:58.493489 2508765 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:58.493495 2508765 out.go:374] Setting ErrFile to fd 2...
	I1101 09:31:58.493499 2508765 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:58.493868 2508765 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 09:31:58.494334 2508765 out.go:368] Setting JSON to false
	I1101 09:31:58.495315 2508765 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":65664,"bootTime":1761923854,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 09:31:58.495397 2508765 start.go:143] virtualization:  
	I1101 09:31:58.515985 2508765 out.go:179] * [default-k8s-diff-port-703627] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 09:31:58.519129 2508765 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:31:58.520265 2508765 notify.go:221] Checking for updates...
	I1101 09:31:58.528868 2508765 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:31:58.531733 2508765 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:31:58.534563 2508765 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2314135/.minikube
	I1101 09:31:58.537463 2508765 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 09:31:58.545161 2508765 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:31:58.552571 2508765 config.go:182] Loaded profile config "embed-certs-312549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:58.552698 2508765 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:31:58.642398 2508765 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:31:58.642524 2508765 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:31:58.746040 2508765 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 09:31:58.736308945 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:31:58.746146 2508765 docker.go:319] overlay module found
	I1101 09:31:58.749242 2508765 out.go:179] * Using the docker driver based on user configuration
	I1101 09:31:58.752151 2508765 start.go:309] selected driver: docker
	I1101 09:31:58.752172 2508765 start.go:930] validating driver "docker" against <nil>
	I1101 09:31:58.752185 2508765 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:31:58.752862 2508765 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:31:58.874709 2508765 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 09:31:58.865210643 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:31:58.874862 2508765 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:31:58.875080 2508765 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:31:58.877976 2508765 out.go:179] * Using Docker driver with root privileges
	I1101 09:31:58.880858 2508765 cni.go:84] Creating CNI manager for ""
	I1101 09:31:58.880923 2508765 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:31:58.880935 2508765 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:31:58.881032 2508765 start.go:353] cluster config:
	{Name:default-k8s-diff-port-703627 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-703627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:31:58.884280 2508765 out.go:179] * Starting "default-k8s-diff-port-703627" primary control-plane node in "default-k8s-diff-port-703627" cluster
	I1101 09:31:58.887118 2508765 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:31:58.890097 2508765 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:31:58.893199 2508765 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:31:58.893270 2508765 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 09:31:58.893280 2508765 cache.go:59] Caching tarball of preloaded images
	I1101 09:31:58.893373 2508765 preload.go:233] Found /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 09:31:58.893382 2508765 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:31:58.893490 2508765 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/config.json ...
	I1101 09:31:58.893510 2508765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/config.json: {Name:mk1d062a219f17dfe2538736f6c17f88855efbaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:31:58.893664 2508765 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:31:58.917021 2508765 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:31:58.917042 2508765 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:31:58.917055 2508765 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:31:58.917091 2508765 start.go:360] acquireMachinesLock for default-k8s-diff-port-703627: {Name:mk723fbf5d77afd626dac1d43272d3636891d6fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:31:58.917191 2508765 start.go:364] duration metric: took 85.692µs to acquireMachinesLock for "default-k8s-diff-port-703627"
	I1101 09:31:58.917217 2508765 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-703627 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-703627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:31:58.917291 2508765 start.go:125] createHost starting for "" (driver="docker")
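	Editor's note: the cluster config dumped above is also persisted to the profile's config.json (path logged a few lines earlier). As an illustrative sketch only, assuming jq is available on the host and that the JSON field names mirror the struct shown in the log, individual fields could be read back with:

		jq '.Name, .KubernetesConfig.KubernetesVersion, .Nodes[0].Port' \
		  /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/config.json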
	I1101 09:31:58.490975 2506068 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 09:31:58.491006 2506068 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 09:31:58.491076 2506068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-312549
	I1101 09:31:58.533492 2506068 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:31:58.533519 2506068 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:31:58.533593 2506068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-312549
	I1101 09:31:58.589127 2506068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36360 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/embed-certs-312549/id_rsa Username:docker}
	I1101 09:31:58.600142 2506068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36360 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/embed-certs-312549/id_rsa Username:docker}
	I1101 09:31:58.611563 2506068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36360 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/embed-certs-312549/id_rsa Username:docker}
	I1101 09:31:58.879240 2506068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:31:58.926013 2506068 node_ready.go:35] waiting up to 6m0s for node "embed-certs-312549" to be "Ready" ...
	I1101 09:31:58.961052 2506068 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 09:31:58.961073 2506068 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 09:31:58.971307 2506068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:31:59.018780 2506068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:31:59.028119 2506068 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 09:31:59.028141 2506068 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 09:31:59.126013 2506068 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 09:31:59.126035 2506068 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 09:31:59.237818 2506068 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 09:31:59.237838 2506068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 09:31:59.317549 2506068 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 09:31:59.317570 2506068 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 09:31:59.356208 2506068 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 09:31:59.356236 2506068 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 09:31:59.384031 2506068 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 09:31:59.384052 2506068 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 09:31:59.417734 2506068 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 09:31:59.417761 2506068 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 09:31:59.481444 2506068 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 09:31:59.481469 2506068 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 09:31:59.507185 2506068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
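	Editor's note: the lines above show the dashboard addon manifests being copied to the embed-certs-312549 node and applied with the bundled kubectl. A rough manual spot-check (illustrative only; it assumes the kubeconfig context is named after the profile and that the addon deploys into its usual kubernetes-dashboard namespace) might be:

		minikube -p embed-certs-312549 addons list
		kubectl --context embed-certs-312549 -n kubernetes-dashboard get deploy,svc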
	I1101 09:31:58.920719 2508765 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 09:31:58.920963 2508765 start.go:159] libmachine.API.Create for "default-k8s-diff-port-703627" (driver="docker")
	I1101 09:31:58.920992 2508765 client.go:173] LocalClient.Create starting
	I1101 09:31:58.921072 2508765 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem
	I1101 09:31:58.921105 2508765 main.go:143] libmachine: Decoding PEM data...
	I1101 09:31:58.921118 2508765 main.go:143] libmachine: Parsing certificate...
	I1101 09:31:58.921169 2508765 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem
	I1101 09:31:58.921185 2508765 main.go:143] libmachine: Decoding PEM data...
	I1101 09:31:58.921194 2508765 main.go:143] libmachine: Parsing certificate...
	I1101 09:31:58.921561 2508765 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-703627 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 09:31:58.944328 2508765 cli_runner.go:211] docker network inspect default-k8s-diff-port-703627 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 09:31:58.944412 2508765 network_create.go:284] running [docker network inspect default-k8s-diff-port-703627] to gather additional debugging logs...
	I1101 09:31:58.944428 2508765 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-703627
	W1101 09:31:58.969041 2508765 cli_runner.go:211] docker network inspect default-k8s-diff-port-703627 returned with exit code 1
	I1101 09:31:58.969069 2508765 network_create.go:287] error running [docker network inspect default-k8s-diff-port-703627]: docker network inspect default-k8s-diff-port-703627: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-703627 not found
	I1101 09:31:58.969081 2508765 network_create.go:289] output of [docker network inspect default-k8s-diff-port-703627]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-703627 not found
	
	** /stderr **
	I1101 09:31:58.969178 2508765 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:31:58.991566 2508765 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2d14cb2bf967 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:44:96:dd:d5:f7} reservation:<nil>}
	I1101 09:31:58.991992 2508765 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5e2113ca68f6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fa:43:2d:73:9d:6f} reservation:<nil>}
	I1101 09:31:58.992364 2508765 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-06825307e87a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:46:bb:6a:93:8e:bc} reservation:<nil>}
	I1101 09:31:58.992652 2508765 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e3dabe0b25d9 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:36:a6:6b:fa:dd:11} reservation:<nil>}
	I1101 09:31:58.993084 2508765 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c37f0}
	I1101 09:31:58.993110 2508765 network_create.go:124] attempt to create docker network default-k8s-diff-port-703627 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1101 09:31:58.993165 2508765 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-703627 default-k8s-diff-port-703627
	I1101 09:31:59.081490 2508765 network_create.go:108] docker network default-k8s-diff-port-703627 192.168.85.0/24 created
	I1101 09:31:59.081537 2508765 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-703627" container
	I1101 09:31:59.081605 2508765 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 09:31:59.110085 2508765 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-703627 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-703627 --label created_by.minikube.sigs.k8s.io=true
	I1101 09:31:59.138633 2508765 oci.go:103] Successfully created a docker volume default-k8s-diff-port-703627
	I1101 09:31:59.138731 2508765 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-703627-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-703627 --entrypoint /usr/bin/test -v default-k8s-diff-port-703627:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 09:31:59.902197 2508765 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-703627
	I1101 09:31:59.902242 2508765 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:31:59.902262 2508765 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 09:31:59.902335 2508765 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-703627:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
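	Editor's note: up to this point the default-k8s-diff-port-703627 run has created its bridge network on 192.168.85.0/24 and its data volume, and is extracting the preloaded image tarball into that volume. A hedged verification sketch reusing the exact names and subnet from the log could be:

		docker network inspect default-k8s-diff-port-703627 --format '{{(index .IPAM.Config 0).Subnet}}'   # expect 192.168.85.0/24
		docker volume inspect default-k8s-diff-port-703627 --format '{{.Mountpoint}}'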
	I1101 09:32:06.091502 2506068 node_ready.go:49] node "embed-certs-312549" is "Ready"
	I1101 09:32:06.091530 2506068 node_ready.go:38] duration metric: took 7.165465648s for node "embed-certs-312549" to be "Ready" ...
	I1101 09:32:06.091544 2506068 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:32:06.091606 2506068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:32:07.987723 2506068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.016385556s)
	I1101 09:32:07.987779 2506068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.968980766s)
	I1101 09:32:07.988160 2506068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.480946857s)
	I1101 09:32:07.988925 2506068 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.897306274s)
	I1101 09:32:07.988946 2506068 api_server.go:72] duration metric: took 9.635448613s to wait for apiserver process to appear ...
	I1101 09:32:07.988952 2506068 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:32:07.988966 2506068 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 09:32:07.991971 2506068 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-312549 addons enable metrics-server
	
	I1101 09:32:08.002717 2506068 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1101 09:32:08.005296 2506068 api_server.go:141] control plane version: v1.34.1
	I1101 09:32:08.005331 2506068 api_server.go:131] duration metric: took 16.371649ms to wait for apiserver health ...
	I1101 09:32:08.005342 2506068 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:32:08.011923 2506068 system_pods.go:59] 8 kube-system pods found
	I1101 09:32:08.011956 2506068 system_pods.go:61] "coredns-66bc5c9577-jnqnt" [9c241743-79ee-45ae-a369-2b4407cec026] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:32:08.011965 2506068 system_pods.go:61] "etcd-embed-certs-312549" [52f5de46-d12b-44f9-9616-8e55b58a80e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:32:08.011973 2506068 system_pods.go:61] "kindnet-xzrpm" [9336823d-a6b8-44ac-ba96-9242d7ea9873] Running
	I1101 09:32:08.011980 2506068 system_pods.go:61] "kube-apiserver-embed-certs-312549" [6c11efc0-4c2f-4bd4-abb7-880d4ac3d8d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:32:08.011987 2506068 system_pods.go:61] "kube-controller-manager-embed-certs-312549" [8c47e850-5e66-4940-81fd-c978de94e2e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:32:08.011992 2506068 system_pods.go:61] "kube-proxy-8d2xs" [d7bfac1f-401f-4f8d-8584-a5240e63915f] Running
	I1101 09:32:08.012000 2506068 system_pods.go:61] "kube-scheduler-embed-certs-312549" [618c4131-1a72-4c19-92fe-3af613bbe965] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:32:08.012004 2506068 system_pods.go:61] "storage-provisioner" [74ce420a-03e3-4f7c-b544-860b65f44d69] Running
	I1101 09:32:08.012010 2506068 system_pods.go:74] duration metric: took 6.662068ms to wait for pod list to return data ...
	I1101 09:32:08.012017 2506068 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:32:08.014179 2506068 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1101 09:32:08.015492 2506068 default_sa.go:45] found service account: "default"
	I1101 09:32:08.015511 2506068 default_sa.go:55] duration metric: took 3.48906ms for default service account to be created ...
	I1101 09:32:08.015520 2506068 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:32:08.016997 2506068 addons.go:515] duration metric: took 9.663189439s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1101 09:32:08.019944 2506068 system_pods.go:86] 8 kube-system pods found
	I1101 09:32:08.020029 2506068 system_pods.go:89] "coredns-66bc5c9577-jnqnt" [9c241743-79ee-45ae-a369-2b4407cec026] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:32:08.020055 2506068 system_pods.go:89] "etcd-embed-certs-312549" [52f5de46-d12b-44f9-9616-8e55b58a80e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:32:08.020090 2506068 system_pods.go:89] "kindnet-xzrpm" [9336823d-a6b8-44ac-ba96-9242d7ea9873] Running
	I1101 09:32:08.020118 2506068 system_pods.go:89] "kube-apiserver-embed-certs-312549" [6c11efc0-4c2f-4bd4-abb7-880d4ac3d8d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:32:08.020141 2506068 system_pods.go:89] "kube-controller-manager-embed-certs-312549" [8c47e850-5e66-4940-81fd-c978de94e2e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:32:08.020174 2506068 system_pods.go:89] "kube-proxy-8d2xs" [d7bfac1f-401f-4f8d-8584-a5240e63915f] Running
	I1101 09:32:08.020200 2506068 system_pods.go:89] "kube-scheduler-embed-certs-312549" [618c4131-1a72-4c19-92fe-3af613bbe965] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:32:08.020231 2506068 system_pods.go:89] "storage-provisioner" [74ce420a-03e3-4f7c-b544-860b65f44d69] Running
	I1101 09:32:08.020268 2506068 system_pods.go:126] duration metric: took 4.741677ms to wait for k8s-apps to be running ...
	I1101 09:32:08.020296 2506068 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:32:08.020391 2506068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:32:08.036105 2506068 system_svc.go:56] duration metric: took 15.801351ms WaitForService to wait for kubelet
	I1101 09:32:08.036130 2506068 kubeadm.go:587] duration metric: took 9.682630672s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:32:08.036150 2506068 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:32:08.043070 2506068 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 09:32:08.043166 2506068 node_conditions.go:123] node cpu capacity is 2
	I1101 09:32:08.043194 2506068 node_conditions.go:105] duration metric: took 7.037409ms to run NodePressure ...
	I1101 09:32:08.043245 2506068 start.go:242] waiting for startup goroutines ...
	I1101 09:32:08.043272 2506068 start.go:247] waiting for cluster config update ...
	I1101 09:32:08.043301 2506068 start.go:256] writing updated cluster config ...
	I1101 09:32:08.043696 2506068 ssh_runner.go:195] Run: rm -f paused
	I1101 09:32:08.047992 2506068 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:32:08.053436 2506068 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jnqnt" in "kube-system" namespace to be "Ready" or be gone ...
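	Editor's note: the embed-certs-312549 run is now polling kube-system pods; the coredns pod is matched by the k8s-app=kube-dns label listed a few lines above. An equivalent manual wait, assuming the kubeconfig context carries the profile name, might be:

		kubectl --context embed-certs-312549 -n kube-system get pods
		kubectl --context embed-certs-312549 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=240s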
	I1101 09:32:04.752216 2508765 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-703627:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.849846771s)
	I1101 09:32:04.752249 2508765 kic.go:203] duration metric: took 4.849983227s to extract preloaded images to volume ...
	W1101 09:32:04.752392 2508765 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 09:32:04.752494 2508765 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 09:32:04.876293 2508765 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-703627 --name default-k8s-diff-port-703627 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-703627 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-703627 --network default-k8s-diff-port-703627 --ip 192.168.85.2 --volume default-k8s-diff-port-703627:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 09:32:05.301050 2508765 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-703627 --format={{.State.Running}}
	I1101 09:32:05.326569 2508765 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-703627 --format={{.State.Status}}
	I1101 09:32:05.352504 2508765 cli_runner.go:164] Run: docker exec default-k8s-diff-port-703627 stat /var/lib/dpkg/alternatives/iptables
	I1101 09:32:05.429416 2508765 oci.go:144] the created container "default-k8s-diff-port-703627" has a running status.
	I1101 09:32:05.429451 2508765 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/default-k8s-diff-port-703627/id_rsa...
	I1101 09:32:05.690250 2508765 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/default-k8s-diff-port-703627/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 09:32:05.734124 2508765 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-703627 --format={{.State.Status}}
	I1101 09:32:05.766584 2508765 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 09:32:05.766602 2508765 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-703627 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 09:32:05.836135 2508765 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-703627 --format={{.State.Status}}
	I1101 09:32:05.868210 2508765 machine.go:94] provisionDockerMachine start ...
	I1101 09:32:05.868316 2508765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-703627
	I1101 09:32:05.895706 2508765 main.go:143] libmachine: Using SSH client type: native
	I1101 09:32:05.896107 2508765 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36365 <nil> <nil>}
	I1101 09:32:05.896124 2508765 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:32:05.896653 2508765 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53610->127.0.0.1:36365: read: connection reset by peer
	I1101 09:32:09.058915 2508765 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-703627
	
	I1101 09:32:09.058941 2508765 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-703627"
	I1101 09:32:09.059016 2508765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-703627
	I1101 09:32:09.081461 2508765 main.go:143] libmachine: Using SSH client type: native
	I1101 09:32:09.081774 2508765 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36365 <nil> <nil>}
	I1101 09:32:09.081786 2508765 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-703627 && echo "default-k8s-diff-port-703627" | sudo tee /etc/hostname
	I1101 09:32:09.241022 2508765 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-703627
	
	I1101 09:32:09.241092 2508765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-703627
	I1101 09:32:09.269970 2508765 main.go:143] libmachine: Using SSH client type: native
	I1101 09:32:09.270310 2508765 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36365 <nil> <nil>}
	I1101 09:32:09.270338 2508765 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-703627' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-703627/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-703627' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:32:09.435968 2508765 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:32:09.436033 2508765 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-2314135/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-2314135/.minikube}
	I1101 09:32:09.436065 2508765 ubuntu.go:190] setting up certificates
	I1101 09:32:09.436089 2508765 provision.go:84] configureAuth start
	I1101 09:32:09.436177 2508765 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-703627
	I1101 09:32:09.458157 2508765 provision.go:143] copyHostCerts
	I1101 09:32:09.458220 2508765 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem, removing ...
	I1101 09:32:09.458229 2508765 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem
	I1101 09:32:09.458306 2508765 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem (1082 bytes)
	I1101 09:32:09.458397 2508765 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem, removing ...
	I1101 09:32:09.458402 2508765 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem
	I1101 09:32:09.458428 2508765 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem (1123 bytes)
	I1101 09:32:09.458481 2508765 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem, removing ...
	I1101 09:32:09.458485 2508765 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem
	I1101 09:32:09.458522 2508765 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem (1675 bytes)
	I1101 09:32:09.458569 2508765 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-703627 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-703627 localhost minikube]
	I1101 09:32:09.861321 2508765 provision.go:177] copyRemoteCerts
	I1101 09:32:09.861430 2508765 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:32:09.861486 2508765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-703627
	I1101 09:32:09.879067 2508765 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36365 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/default-k8s-diff-port-703627/id_rsa Username:docker}
	I1101 09:32:09.988432 2508765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:32:10.013092 2508765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1101 09:32:10.046954 2508765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:32:10.082035 2508765 provision.go:87] duration metric: took 645.891571ms to configureAuth
	I1101 09:32:10.082115 2508765 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:32:10.082365 2508765 config.go:182] Loaded profile config "default-k8s-diff-port-703627": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:32:10.082542 2508765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-703627
	I1101 09:32:10.107838 2508765 main.go:143] libmachine: Using SSH client type: native
	I1101 09:32:10.108180 2508765 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36365 <nil> <nil>}
	I1101 09:32:10.108194 2508765 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:32:10.429928 2508765 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:32:10.429950 2508765 machine.go:97] duration metric: took 4.56171418s to provisionDockerMachine
	I1101 09:32:10.429968 2508765 client.go:176] duration metric: took 11.508961657s to LocalClient.Create
	I1101 09:32:10.429982 2508765 start.go:167] duration metric: took 11.509020774s to libmachine.API.Create "default-k8s-diff-port-703627"
	I1101 09:32:10.429990 2508765 start.go:293] postStartSetup for "default-k8s-diff-port-703627" (driver="docker")
	I1101 09:32:10.430001 2508765 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:32:10.430063 2508765 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:32:10.430109 2508765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-703627
	I1101 09:32:10.448257 2508765 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36365 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/default-k8s-diff-port-703627/id_rsa Username:docker}
	I1101 09:32:10.554105 2508765 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:32:10.559547 2508765 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:32:10.559573 2508765 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:32:10.559583 2508765 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/addons for local assets ...
	I1101 09:32:10.559654 2508765 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/files for local assets ...
	I1101 09:32:10.559751 2508765 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem -> 23159822.pem in /etc/ssl/certs
	I1101 09:32:10.559892 2508765 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:32:10.569138 2508765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem --> /etc/ssl/certs/23159822.pem (1708 bytes)
	I1101 09:32:10.588247 2508765 start.go:296] duration metric: took 158.242356ms for postStartSetup
	I1101 09:32:10.588667 2508765 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-703627
	I1101 09:32:10.605395 2508765 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/config.json ...
	I1101 09:32:10.605666 2508765 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:32:10.605713 2508765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-703627
	I1101 09:32:10.622593 2508765 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36365 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/default-k8s-diff-port-703627/id_rsa Username:docker}
	I1101 09:32:10.724894 2508765 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:32:10.729412 2508765 start.go:128] duration metric: took 11.812106507s to createHost
	I1101 09:32:10.729441 2508765 start.go:83] releasing machines lock for "default-k8s-diff-port-703627", held for 11.812240509s
	I1101 09:32:10.729524 2508765 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-703627
	I1101 09:32:10.746638 2508765 ssh_runner.go:195] Run: cat /version.json
	I1101 09:32:10.746688 2508765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-703627
	I1101 09:32:10.746712 2508765 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:32:10.746777 2508765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-703627
	I1101 09:32:10.781500 2508765 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36365 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/default-k8s-diff-port-703627/id_rsa Username:docker}
	I1101 09:32:10.793314 2508765 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36365 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/default-k8s-diff-port-703627/id_rsa Username:docker}
	I1101 09:32:10.887627 2508765 ssh_runner.go:195] Run: systemctl --version
	I1101 09:32:10.980224 2508765 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:32:11.018175 2508765 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:32:11.022486 2508765 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:32:11.022559 2508765 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:32:11.052538 2508765 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 09:32:11.052571 2508765 start.go:496] detecting cgroup driver to use...
	I1101 09:32:11.052622 2508765 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 09:32:11.052718 2508765 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:32:11.071196 2508765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:32:11.085059 2508765 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:32:11.085137 2508765 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:32:11.104621 2508765 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:32:11.125006 2508765 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:32:11.255955 2508765 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:32:11.376102 2508765 docker.go:234] disabling docker service ...
	I1101 09:32:11.376171 2508765 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:32:11.396531 2508765 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:32:11.410077 2508765 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:32:11.527631 2508765 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:32:11.650826 2508765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:32:11.663374 2508765 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:32:11.677277 2508765 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:32:11.677390 2508765 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:32:11.685814 2508765 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:32:11.685878 2508765 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:32:11.694241 2508765 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:32:11.702778 2508765 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:32:11.711525 2508765 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:32:11.719974 2508765 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:32:11.728362 2508765 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:32:11.740826 2508765 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:32:11.749756 2508765 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:32:11.757062 2508765 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:32:11.764039 2508765 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:32:11.903360 2508765 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:32:12.062095 2508765 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:32:12.062164 2508765 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:32:12.066089 2508765 start.go:564] Will wait 60s for crictl version
	I1101 09:32:12.066153 2508765 ssh_runner.go:195] Run: which crictl
	I1101 09:32:12.069715 2508765 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:32:12.097157 2508765 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:32:12.097250 2508765 ssh_runner.go:195] Run: crio --version
	I1101 09:32:12.129509 2508765 ssh_runner.go:195] Run: crio --version
	I1101 09:32:12.171118 2508765 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1101 09:32:10.124738 2506068 pod_ready.go:104] pod "coredns-66bc5c9577-jnqnt" is not "Ready", error: <nil>
	W1101 09:32:12.564882 2506068 pod_ready.go:104] pod "coredns-66bc5c9577-jnqnt" is not "Ready", error: <nil>
	I1101 09:32:12.174180 2508765 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-703627 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:32:12.197082 2508765 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 09:32:12.202284 2508765 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:32:12.213992 2508765 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-703627 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-703627 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:32:12.214111 2508765 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:32:12.214171 2508765 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:32:12.259654 2508765 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:32:12.259681 2508765 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:32:12.259738 2508765 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:32:12.307258 2508765 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:32:12.307285 2508765 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:32:12.307294 2508765 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1101 09:32:12.307378 2508765 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-703627 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-703627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
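
In the kubelet drop-in above, the empty ExecStart= line clears the command inherited from the base kubelet.service, so the drop-in's own ExecStart fully replaces it. Once the unit files are copied to the node (the scp steps below), the merged unit could be checked with standard systemd commands, for example:

    # Show the base unit plus drop-ins, and the ExecStart systemd will actually use.
    systemctl cat kubelet
    systemctl show kubelet -p ExecStart
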
	I1101 09:32:12.307455 2508765 ssh_runner.go:195] Run: crio config
	I1101 09:32:12.380211 2508765 cni.go:84] Creating CNI manager for ""
	I1101 09:32:12.380241 2508765 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:32:12.380260 2508765 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:32:12.380282 2508765 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-703627 NodeName:default-k8s-diff-port-703627 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:32:12.380432 2508765 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-703627"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:32:12.380503 2508765 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:32:12.394231 2508765 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:32:12.394296 2508765 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:32:12.404750 2508765 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1101 09:32:12.420527 2508765 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:32:12.436262 2508765 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
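
The generated config lands on the node as /var/tmp/minikube/kubeadm.yaml.new (the 2225-byte scp above) before being promoted to kubeadm.yaml. As a sketch, the same file could be exercised without changing node state by running kubeadm's dry-run mode against it:

    # Hypothetical pre-check of the generated config (dry run, nothing applied).
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
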
	I1101 09:32:12.454091 2508765 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:32:12.460241 2508765 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
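
Each of these /etc/hosts one-liners strips any stale entry for the name and appends a fresh one, so after this step and the earlier 09:32:12.202284 run the node's /etc/hosts carries two minikube-specific records:

    192.168.85.1	host.minikube.internal
    192.168.85.2	control-plane.minikube.internal
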
	I1101 09:32:12.472668 2508765 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:32:12.658557 2508765 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:32:12.684402 2508765 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627 for IP: 192.168.85.2
	I1101 09:32:12.684483 2508765 certs.go:195] generating shared ca certs ...
	I1101 09:32:12.684550 2508765 certs.go:227] acquiring lock for ca certs: {Name:mk24842b93d4e231663829c7c8677798ff77a3a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:32:12.684783 2508765 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key
	I1101 09:32:12.684844 2508765 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key
	I1101 09:32:12.684851 2508765 certs.go:257] generating profile certs ...
	I1101 09:32:12.684913 2508765 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/client.key
	I1101 09:32:12.684929 2508765 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/client.crt with IP's: []
	I1101 09:32:12.860062 2508765 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/client.crt ...
	I1101 09:32:12.860096 2508765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/client.crt: {Name:mk2c762d23e021a8c8564f6a1b66b779c7bdaa56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:32:12.860315 2508765 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/client.key ...
	I1101 09:32:12.860339 2508765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/client.key: {Name:mkda4c695ebe1c439a322831ac1341a4575dc783 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:32:12.860451 2508765 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/apiserver.key.3f1ecf36
	I1101 09:32:12.860471 2508765 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/apiserver.crt.3f1ecf36 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1101 09:32:13.432339 2508765 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/apiserver.crt.3f1ecf36 ...
	I1101 09:32:13.432408 2508765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/apiserver.crt.3f1ecf36: {Name:mkfc59b1ebc5f85eccf6c67497b70990b3cc7f37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:32:13.432650 2508765 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/apiserver.key.3f1ecf36 ...
	I1101 09:32:13.432685 2508765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/apiserver.key.3f1ecf36: {Name:mkbc11b514f668962e85580866dcccd9f9140198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:32:13.432833 2508765 certs.go:382] copying /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/apiserver.crt.3f1ecf36 -> /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/apiserver.crt
	I1101 09:32:13.432979 2508765 certs.go:386] copying /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/apiserver.key.3f1ecf36 -> /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/apiserver.key
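
The apiserver certificate assembled above carries the IP SANs listed at 09:32:12.860471 (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2). If needed, they can be confirmed against the profile copy with openssl, e.g.:

    # Print the Subject Alternative Names baked into the new apiserver cert.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
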
	I1101 09:32:13.433085 2508765 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/proxy-client.key
	I1101 09:32:13.433122 2508765 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/proxy-client.crt with IP's: []
	I1101 09:32:15.557685 2508765 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/proxy-client.crt ...
	I1101 09:32:15.557765 2508765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/proxy-client.crt: {Name:mk7141a4976d8e6cc10566e85902faa6bb78821a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:32:15.558001 2508765 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/proxy-client.key ...
	I1101 09:32:15.558039 2508765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/proxy-client.key: {Name:mk5f34ac4f623371024f2515e4c4835a7ef41854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:32:15.558385 2508765 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem (1338 bytes)
	W1101 09:32:15.558461 2508765 certs.go:480] ignoring /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982_empty.pem, impossibly tiny 0 bytes
	I1101 09:32:15.558504 2508765 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 09:32:15.558555 2508765 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:32:15.558615 2508765 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:32:15.558663 2508765 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem (1675 bytes)
	I1101 09:32:15.558744 2508765 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem (1708 bytes)
	I1101 09:32:15.560740 2508765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:32:15.589665 2508765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 09:32:15.610287 2508765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:32:15.630832 2508765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:32:15.661426 2508765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1101 09:32:15.681495 2508765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:32:15.701212 2508765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:32:15.722720 2508765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 09:32:15.750617 2508765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem --> /usr/share/ca-certificates/23159822.pem (1708 bytes)
	I1101 09:32:15.786842 2508765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:32:15.818236 2508765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem --> /usr/share/ca-certificates/2315982.pem (1338 bytes)
	I1101 09:32:15.847677 2508765 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:32:15.860802 2508765 ssh_runner.go:195] Run: openssl version
	I1101 09:32:15.867308 2508765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23159822.pem && ln -fs /usr/share/ca-certificates/23159822.pem /etc/ssl/certs/23159822.pem"
	I1101 09:32:15.875421 2508765 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23159822.pem
	I1101 09:32:15.879971 2508765 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 08:36 /usr/share/ca-certificates/23159822.pem
	I1101 09:32:15.880044 2508765 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23159822.pem
	I1101 09:32:15.923493 2508765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23159822.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:32:15.931919 2508765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:32:15.940541 2508765 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:32:15.944856 2508765 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:32:15.944941 2508765 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:32:15.987818 2508765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:32:15.997577 2508765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2315982.pem && ln -fs /usr/share/ca-certificates/2315982.pem /etc/ssl/certs/2315982.pem"
	I1101 09:32:16.017618 2508765 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2315982.pem
	I1101 09:32:16.023426 2508765 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 08:36 /usr/share/ca-certificates/2315982.pem
	I1101 09:32:16.023509 2508765 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2315982.pem
	I1101 09:32:16.102053 2508765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2315982.pem /etc/ssl/certs/51391683.0"
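
The three openssl/ln pairs above follow OpenSSL's hashed-CA convention: `openssl x509 -hash -noout` prints the subject-name hash, and a symlink named <hash>.0 under /etc/ssl/certs is what lets OpenSSL find the CA. For the minikubeCA case, for example:

    # The printed hash (b5213941 for minikubeCA, per the link created above)
    # becomes the symlink name OpenSSL looks up in /etc/ssl/certs.
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
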
	I1101 09:32:16.111485 2508765 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:32:16.116264 2508765 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 09:32:16.116336 2508765 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-703627 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-703627 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:32:16.116434 2508765 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:32:16.116506 2508765 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:32:16.154361 2508765 cri.go:89] found id: ""
	I1101 09:32:16.154453 2508765 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:32:16.165484 2508765 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:32:16.174667 2508765 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 09:32:16.174743 2508765 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:32:16.186930 2508765 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 09:32:16.186954 2508765 kubeadm.go:158] found existing configuration files:
	
	I1101 09:32:16.187022 2508765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1101 09:32:16.196488 2508765 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 09:32:16.196564 2508765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 09:32:16.204889 2508765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1101 09:32:16.215035 2508765 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 09:32:16.215117 2508765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:32:16.223933 2508765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1101 09:32:16.233289 2508765 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 09:32:16.233371 2508765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:32:16.241658 2508765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1101 09:32:16.250123 2508765 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 09:32:16.250195 2508765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
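
The four grep/rm pairs above are the stale-config cleanup: any kubeconfig under /etc/kubernetes that does not point at https://control-plane.minikube.internal:8444 is removed before kubeadm runs (here all four are simply absent, so the removals are no-ops). Condensed, the logic amounts to:

    # Condensed form of the cleanup performed above (sketch).
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done
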
	I1101 09:32:16.258339 2508765 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 09:32:16.345859 2508765 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 09:32:16.346144 2508765 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 09:32:16.392228 2508765 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 09:32:16.392545 2508765 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 09:32:16.392594 2508765 kubeadm.go:319] OS: Linux
	I1101 09:32:16.392662 2508765 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 09:32:16.392725 2508765 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 09:32:16.392787 2508765 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 09:32:16.392846 2508765 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 09:32:16.392900 2508765 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 09:32:16.392961 2508765 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 09:32:16.393019 2508765 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 09:32:16.393079 2508765 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 09:32:16.393131 2508765 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 09:32:16.474747 2508765 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 09:32:16.474880 2508765 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 09:32:16.474992 2508765 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 09:32:16.484807 2508765 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1101 09:32:15.059775 2506068 pod_ready.go:104] pod "coredns-66bc5c9577-jnqnt" is not "Ready", error: <nil>
	W1101 09:32:17.559844 2506068 pod_ready.go:104] pod "coredns-66bc5c9577-jnqnt" is not "Ready", error: <nil>
	I1101 09:32:16.490760 2508765 out.go:252]   - Generating certificates and keys ...
	I1101 09:32:16.490866 2508765 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 09:32:16.490946 2508765 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 09:32:17.039684 2508765 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 09:32:17.783842 2508765 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 09:32:18.452093 2508765 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	W1101 09:32:19.564466 2506068 pod_ready.go:104] pod "coredns-66bc5c9577-jnqnt" is not "Ready", error: <nil>
	W1101 09:32:22.061016 2506068 pod_ready.go:104] pod "coredns-66bc5c9577-jnqnt" is not "Ready", error: <nil>
	I1101 09:32:19.624060 2508765 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 09:32:19.904214 2508765 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 09:32:19.904828 2508765 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-703627 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 09:32:19.975375 2508765 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 09:32:19.975970 2508765 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-703627 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 09:32:20.143528 2508765 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 09:32:21.457290 2508765 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 09:32:21.660745 2508765 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 09:32:21.661255 2508765 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 09:32:23.130838 2508765 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 09:32:23.420192 2508765 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 09:32:23.913968 2508765 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 09:32:24.323713 2508765 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 09:32:25.165715 2508765 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 09:32:25.166908 2508765 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 09:32:25.169922 2508765 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1101 09:32:24.061351 2506068 pod_ready.go:104] pod "coredns-66bc5c9577-jnqnt" is not "Ready", error: <nil>
	W1101 09:32:26.560877 2506068 pod_ready.go:104] pod "coredns-66bc5c9577-jnqnt" is not "Ready", error: <nil>
	I1101 09:32:25.175491 2508765 out.go:252]   - Booting up control plane ...
	I1101 09:32:25.175603 2508765 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:32:25.175725 2508765 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:32:25.175891 2508765 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 09:32:25.190506 2508765 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:32:25.190619 2508765 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 09:32:25.199100 2508765 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 09:32:25.199788 2508765 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:32:25.200204 2508765 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:32:25.330580 2508765 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 09:32:25.330705 2508765 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 09:32:26.331005 2508765 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000693161s
	I1101 09:32:26.343370 2508765 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 09:32:26.343488 2508765 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1101 09:32:26.343588 2508765 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 09:32:26.343676 2508765 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1101 09:32:29.059141 2506068 pod_ready.go:104] pod "coredns-66bc5c9577-jnqnt" is not "Ready", error: <nil>
	W1101 09:32:31.059277 2506068 pod_ready.go:104] pod "coredns-66bc5c9577-jnqnt" is not "Ready", error: <nil>
	I1101 09:32:29.514585 2508765 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.170629236s
	I1101 09:32:32.627510 2508765 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.284120701s
	I1101 09:32:34.345853 2508765 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.002313817s
	I1101 09:32:34.366815 2508765 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 09:32:34.381347 2508765 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 09:32:34.396523 2508765 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 09:32:34.396830 2508765 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-703627 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 09:32:34.408731 2508765 kubeadm.go:319] [bootstrap-token] Using token: rg7jt1.ljre0wz8jdt44ha8
	I1101 09:32:34.411646 2508765 out.go:252]   - Configuring RBAC rules ...
	I1101 09:32:34.411780 2508765 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 09:32:34.417667 2508765 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 09:32:34.428168 2508765 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 09:32:34.432260 2508765 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 09:32:34.436804 2508765 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 09:32:34.441110 2508765 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 09:32:34.755552 2508765 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 09:32:35.215951 2508765 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 09:32:35.754165 2508765 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 09:32:35.755546 2508765 kubeadm.go:319] 
	I1101 09:32:35.755634 2508765 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 09:32:35.755645 2508765 kubeadm.go:319] 
	I1101 09:32:35.755734 2508765 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 09:32:35.755743 2508765 kubeadm.go:319] 
	I1101 09:32:35.755773 2508765 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 09:32:35.755838 2508765 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 09:32:35.755947 2508765 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 09:32:35.755960 2508765 kubeadm.go:319] 
	I1101 09:32:35.756016 2508765 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 09:32:35.756020 2508765 kubeadm.go:319] 
	I1101 09:32:35.756074 2508765 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 09:32:35.756078 2508765 kubeadm.go:319] 
	I1101 09:32:35.756130 2508765 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 09:32:35.756205 2508765 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 09:32:35.756273 2508765 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 09:32:35.756278 2508765 kubeadm.go:319] 
	I1101 09:32:35.756361 2508765 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 09:32:35.756438 2508765 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 09:32:35.756443 2508765 kubeadm.go:319] 
	I1101 09:32:35.756527 2508765 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token rg7jt1.ljre0wz8jdt44ha8 \
	I1101 09:32:35.756630 2508765 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4543f3590cccb8495171c728a2631a18a238961aafa5b09f43cdaf25ae01fa5d \
	I1101 09:32:35.756657 2508765 kubeadm.go:319] 	--control-plane 
	I1101 09:32:35.756663 2508765 kubeadm.go:319] 
	I1101 09:32:35.756747 2508765 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 09:32:35.756751 2508765 kubeadm.go:319] 
	I1101 09:32:35.756833 2508765 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token rg7jt1.ljre0wz8jdt44ha8 \
	I1101 09:32:35.756934 2508765 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4543f3590cccb8495171c728a2631a18a238961aafa5b09f43cdaf25ae01fa5d 
	I1101 09:32:35.760694 2508765 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 09:32:35.760927 2508765 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 09:32:35.761056 2508765 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
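
The --discovery-token-ca-cert-hash printed in the join commands above is a SHA-256 over the cluster CA's DER-encoded public key. As a sketch (assuming the CA key is RSA, which the key sizes earlier in the log suggest), it can be recomputed on the control plane with the usual openssl pipeline, using the certificatesDir from the kubeadm config above:

    # Recompute the discovery-token-ca-cert-hash from the cluster CA.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
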
	I1101 09:32:35.761079 2508765 cni.go:84] Creating CNI manager for ""
	I1101 09:32:35.761087 2508765 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:32:35.764257 2508765 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1101 09:32:33.559557 2506068 pod_ready.go:104] pod "coredns-66bc5c9577-jnqnt" is not "Ready", error: <nil>
	W1101 09:32:36.060136 2506068 pod_ready.go:104] pod "coredns-66bc5c9577-jnqnt" is not "Ready", error: <nil>
	I1101 09:32:35.767071 2508765 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 09:32:35.771810 2508765 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 09:32:35.771831 2508765 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 09:32:35.785930 2508765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 09:32:36.180103 2508765 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:32:36.180256 2508765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:32:36.180334 2508765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-703627 minikube.k8s.io/updated_at=2025_11_01T09_32_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192 minikube.k8s.io/name=default-k8s-diff-port-703627 minikube.k8s.io/primary=true
	I1101 09:32:36.397143 2508765 ops.go:34] apiserver oom_adj: -16
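
The `kubectl create clusterrolebinding minikube-rbac` call at 09:32:36.180256 grants cluster-admin to the kube-system:default service account so that components running under it can manage cluster resources. Expressed as the equivalent manifest that command creates (sketch):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: minikube-rbac
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: default
      namespace: kube-system
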
	I1101 09:32:36.397258 2508765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:32:36.897530 2508765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:32:37.398010 2508765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:32:37.898302 2508765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:32:38.397315 2508765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:32:38.897310 2508765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:32:39.397560 2508765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:32:39.897447 2508765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:32:39.988503 2508765 kubeadm.go:1114] duration metric: took 3.808291297s to wait for elevateKubeSystemPrivileges
	I1101 09:32:39.988531 2508765 kubeadm.go:403] duration metric: took 23.872201067s to StartCluster
	I1101 09:32:39.988549 2508765 settings.go:142] acquiring lock: {Name:mka73a3765cb6575d4abe38a6ae3325222684786 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:32:39.988607 2508765 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:32:39.991006 2508765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/kubeconfig: {Name:mk53329368b7306829f4e47471838b13e1e36d2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:32:39.991253 2508765 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:32:39.991267 2508765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 09:32:39.991536 2508765 config.go:182] Loaded profile config "default-k8s-diff-port-703627": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:32:39.991626 2508765 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:32:39.991683 2508765 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-703627"
	I1101 09:32:39.991698 2508765 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-703627"
	I1101 09:32:39.991719 2508765 host.go:66] Checking if "default-k8s-diff-port-703627" exists ...
	I1101 09:32:39.992225 2508765 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-703627 --format={{.State.Status}}
	I1101 09:32:39.992711 2508765 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-703627"
	I1101 09:32:39.992733 2508765 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-703627"
	I1101 09:32:39.992994 2508765 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-703627 --format={{.State.Status}}
	I1101 09:32:39.995458 2508765 out.go:179] * Verifying Kubernetes components...
	I1101 09:32:40.004015 2508765 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:32:40.048725 2508765 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:32:40.051007 2508765 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-703627"
	I1101 09:32:40.051081 2508765 host.go:66] Checking if "default-k8s-diff-port-703627" exists ...
	I1101 09:32:40.051592 2508765 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-703627 --format={{.State.Status}}
	I1101 09:32:40.051941 2508765 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:32:40.051972 2508765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:32:40.052074 2508765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-703627
	I1101 09:32:40.096162 2508765 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36365 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/default-k8s-diff-port-703627/id_rsa Username:docker}
	I1101 09:32:40.115128 2508765 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:32:40.115152 2508765 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:32:40.115215 2508765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-703627
	I1101 09:32:40.145246 2508765 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36365 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/default-k8s-diff-port-703627/id_rsa Username:docker}
	I1101 09:32:40.256715 2508765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
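
The pipeline above rewrites the CoreDNS ConfigMap in place: it adds a bare `log` directive above the existing `errors` line and inserts a `hosts` block immediately before `forward . /etc/resolv.conf`, which is what later lets pods resolve host.minikube.internal. The injected Corefile fragment reads:

        hosts {
           192.168.85.1 host.minikube.internal
           fallthrough
        }
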
	I1101 09:32:40.315753 2508765 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:32:40.361310 2508765 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:32:40.408262 2508765 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:32:40.852728 2508765 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1101 09:32:40.853967 2508765 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-703627" to be "Ready" ...
	W1101 09:32:40.934206 2508765 kapi.go:211] failed rescaling "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-703627" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E1101 09:32:40.934232 2508765 start.go:161] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I1101 09:32:41.193740 2508765 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1101 09:32:38.559590 2506068 pod_ready.go:104] pod "coredns-66bc5c9577-jnqnt" is not "Ready", error: <nil>
	W1101 09:32:41.060234 2506068 pod_ready.go:104] pod "coredns-66bc5c9577-jnqnt" is not "Ready", error: <nil>
	I1101 09:32:41.197609 2508765 addons.go:515] duration metric: took 1.205963151s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1101 09:32:42.859219 2508765 node_ready.go:57] node "default-k8s-diff-port-703627" has "Ready":"False" status (will retry)
	W1101 09:32:43.559332 2506068 pod_ready.go:104] pod "coredns-66bc5c9577-jnqnt" is not "Ready", error: <nil>
	W1101 09:32:45.559723 2506068 pod_ready.go:104] pod "coredns-66bc5c9577-jnqnt" is not "Ready", error: <nil>
	W1101 09:32:48.059761 2506068 pod_ready.go:104] pod "coredns-66bc5c9577-jnqnt" is not "Ready", error: <nil>
	W1101 09:32:45.371493 2508765 node_ready.go:57] node "default-k8s-diff-port-703627" has "Ready":"False" status (will retry)
	W1101 09:32:47.859206 2508765 node_ready.go:57] node "default-k8s-diff-port-703627" has "Ready":"False" status (will retry)
	I1101 09:32:49.058540 2506068 pod_ready.go:94] pod "coredns-66bc5c9577-jnqnt" is "Ready"
	I1101 09:32:49.058568 2506068 pod_ready.go:86] duration metric: took 41.005030278s for pod "coredns-66bc5c9577-jnqnt" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:32:49.061456 2506068 pod_ready.go:83] waiting for pod "etcd-embed-certs-312549" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:32:49.069538 2506068 pod_ready.go:94] pod "etcd-embed-certs-312549" is "Ready"
	I1101 09:32:49.069574 2506068 pod_ready.go:86] duration metric: took 8.085109ms for pod "etcd-embed-certs-312549" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:32:49.071775 2506068 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-312549" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:32:49.076111 2506068 pod_ready.go:94] pod "kube-apiserver-embed-certs-312549" is "Ready"
	I1101 09:32:49.076135 2506068 pod_ready.go:86] duration metric: took 4.305628ms for pod "kube-apiserver-embed-certs-312549" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:32:49.078220 2506068 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-312549" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:32:49.258059 2506068 pod_ready.go:94] pod "kube-controller-manager-embed-certs-312549" is "Ready"
	I1101 09:32:49.258084 2506068 pod_ready.go:86] duration metric: took 179.842529ms for pod "kube-controller-manager-embed-certs-312549" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:32:49.457137 2506068 pod_ready.go:83] waiting for pod "kube-proxy-8d2xs" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:32:49.857443 2506068 pod_ready.go:94] pod "kube-proxy-8d2xs" is "Ready"
	I1101 09:32:49.857470 2506068 pod_ready.go:86] duration metric: took 400.308242ms for pod "kube-proxy-8d2xs" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:32:50.057830 2506068 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-312549" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:32:50.456766 2506068 pod_ready.go:94] pod "kube-scheduler-embed-certs-312549" is "Ready"
	I1101 09:32:50.456794 2506068 pod_ready.go:86] duration metric: took 398.936473ms for pod "kube-scheduler-embed-certs-312549" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:32:50.456807 2506068 pod_ready.go:40] duration metric: took 42.408721656s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:32:50.507038 2506068 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 09:32:50.510598 2506068 out.go:179] * Done! kubectl is now configured to use "embed-certs-312549" cluster and "default" namespace by default
	W1101 09:32:49.859770 2508765 node_ready.go:57] node "default-k8s-diff-port-703627" has "Ready":"False" status (will retry)
	W1101 09:32:52.359520 2508765 node_ready.go:57] node "default-k8s-diff-port-703627" has "Ready":"False" status (will retry)
	W1101 09:32:54.858442 2508765 node_ready.go:57] node "default-k8s-diff-port-703627" has "Ready":"False" status (will retry)
	W1101 09:32:56.858776 2508765 node_ready.go:57] node "default-k8s-diff-port-703627" has "Ready":"False" status (will retry)
	W1101 09:32:59.358446 2508765 node_ready.go:57] node "default-k8s-diff-port-703627" has "Ready":"False" status (will retry)
	W1101 09:33:01.359702 2508765 node_ready.go:57] node "default-k8s-diff-port-703627" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 01 09:32:47 embed-certs-312549 crio[653]: time="2025-11-01T09:32:47.152343316Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:32:47 embed-certs-312549 crio[653]: time="2025-11-01T09:32:47.155789358Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:32:47 embed-certs-312549 crio[653]: time="2025-11-01T09:32:47.155822407Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:32:47 embed-certs-312549 crio[653]: time="2025-11-01T09:32:47.155844027Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:32:47 embed-certs-312549 crio[653]: time="2025-11-01T09:32:47.159205567Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:32:47 embed-certs-312549 crio[653]: time="2025-11-01T09:32:47.159237525Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:32:47 embed-certs-312549 crio[653]: time="2025-11-01T09:32:47.159260687Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:32:47 embed-certs-312549 crio[653]: time="2025-11-01T09:32:47.163078854Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:32:47 embed-certs-312549 crio[653]: time="2025-11-01T09:32:47.163110812Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:32:47 embed-certs-312549 crio[653]: time="2025-11-01T09:32:47.163132506Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:32:47 embed-certs-312549 crio[653]: time="2025-11-01T09:32:47.166017554Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:32:47 embed-certs-312549 crio[653]: time="2025-11-01T09:32:47.16604477Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:32:57 embed-certs-312549 crio[653]: time="2025-11-01T09:32:57.423340301Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=706cfbdc-6f88-4dff-8857-0a735d946d62 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:32:57 embed-certs-312549 crio[653]: time="2025-11-01T09:32:57.424364018Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=16643711-99eb-4b80-aa73-724d3d31a769 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:32:57 embed-certs-312549 crio[653]: time="2025-11-01T09:32:57.425363334Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-snsdd/dashboard-metrics-scraper" id=6c74b83a-5412-487b-ac58-9dbc32299b10 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:32:57 embed-certs-312549 crio[653]: time="2025-11-01T09:32:57.42547936Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:32:57 embed-certs-312549 crio[653]: time="2025-11-01T09:32:57.43252892Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:32:57 embed-certs-312549 crio[653]: time="2025-11-01T09:32:57.433208007Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:32:57 embed-certs-312549 crio[653]: time="2025-11-01T09:32:57.448531282Z" level=info msg="Created container 4247865bc1f3f240b55592eaebb9c6bd7f2d474b8128881a488f19dcbf493252: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-snsdd/dashboard-metrics-scraper" id=6c74b83a-5412-487b-ac58-9dbc32299b10 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:32:57 embed-certs-312549 crio[653]: time="2025-11-01T09:32:57.453136833Z" level=info msg="Starting container: 4247865bc1f3f240b55592eaebb9c6bd7f2d474b8128881a488f19dcbf493252" id=01ee5ead-319f-4606-a25c-6c09d29882c4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:32:57 embed-certs-312549 crio[653]: time="2025-11-01T09:32:57.454810444Z" level=info msg="Started container" PID=1715 containerID=4247865bc1f3f240b55592eaebb9c6bd7f2d474b8128881a488f19dcbf493252 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-snsdd/dashboard-metrics-scraper id=01ee5ead-319f-4606-a25c-6c09d29882c4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2d91485d6b53f831188b0439fb9e7c24b95d62c1ad3442cf4bfb397050feae65
	Nov 01 09:32:57 embed-certs-312549 conmon[1713]: conmon 4247865bc1f3f240b555 <ninfo>: container 1715 exited with status 1
	Nov 01 09:32:57 embed-certs-312549 crio[653]: time="2025-11-01T09:32:57.74261966Z" level=info msg="Removing container: ddcefe5efe0dcb19b36708e18ad76abafe74fa123641b6b402c89205d3d0731a" id=793a3c15-918d-485c-b2a0-f8f82dd7fc0b name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:32:57 embed-certs-312549 crio[653]: time="2025-11-01T09:32:57.75242299Z" level=info msg="Error loading conmon cgroup of container ddcefe5efe0dcb19b36708e18ad76abafe74fa123641b6b402c89205d3d0731a: cgroup deleted" id=793a3c15-918d-485c-b2a0-f8f82dd7fc0b name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:32:57 embed-certs-312549 crio[653]: time="2025-11-01T09:32:57.757314177Z" level=info msg="Removed container ddcefe5efe0dcb19b36708e18ad76abafe74fa123641b6b402c89205d3d0731a: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-snsdd/dashboard-metrics-scraper" id=793a3c15-918d-485c-b2a0-f8f82dd7fc0b name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	4247865bc1f3f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           10 seconds ago       Exited              dashboard-metrics-scraper   3                   2d91485d6b53f       dashboard-metrics-scraper-6ffb444bf9-snsdd   kubernetes-dashboard
	88f21e91d38ee       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           30 seconds ago       Running             storage-provisioner         2                   42cbe4d69d691       storage-provisioner                          kube-system
	72995eb1c1da3       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   43 seconds ago       Running             kubernetes-dashboard        0                   b24978389d492       kubernetes-dashboard-855c9754f9-gfpxp        kubernetes-dashboard
	3bad1a5a2c564       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   f431879df5193       coredns-66bc5c9577-jnqnt                     kube-system
	c78a62d8b2686       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   8c30da96ce4e0       busybox                                      default
	b61df37594a55       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   4307d6b3bf37e       kube-proxy-8d2xs                             kube-system
	94bc258df31cc       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           About a minute ago   Exited              storage-provisioner         1                   42cbe4d69d691       storage-provisioner                          kube-system
	2e0be9c9bcec6       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   4d568b30ba8dc       kindnet-xzrpm                                kube-system
	416e95ed80a8e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   c29f8052a2cba       kube-apiserver-embed-certs-312549            kube-system
	ccdcc22e1e214       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   39c68ccff27be       kube-scheduler-embed-certs-312549            kube-system
	830d779c1441c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   fc72119385ca9       kube-controller-manager-embed-certs-312549   kube-system
	680ffbebf2250       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   192ef4c6b8338       etcd-embed-certs-312549                      kube-system
	
	
	==> coredns [3bad1a5a2c56426afecd3053392722f520a806428d830ce21c18416e168ff456] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59944 - 26656 "HINFO IN 2218504287334214664.9098039000590878807. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013951563s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               embed-certs-312549
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-312549
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=embed-certs-312549
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_30_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:30:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-312549
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:32:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:32:36 +0000   Sat, 01 Nov 2025 09:30:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:32:36 +0000   Sat, 01 Nov 2025 09:30:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:32:36 +0000   Sat, 01 Nov 2025 09:30:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:32:36 +0000   Sat, 01 Nov 2025 09:31:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-312549
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                9d18f598-7720-463f-91f2-ddc5b6ab87e3
	  Boot ID:                    eebecd53-57fd-46e5-aa39-103fca906436
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 coredns-66bc5c9577-jnqnt                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m30s
	  kube-system                 etcd-embed-certs-312549                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m35s
	  kube-system                 kindnet-xzrpm                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m29s
	  kube-system                 kube-apiserver-embed-certs-312549             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 kube-controller-manager-embed-certs-312549    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-proxy-8d2xs                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-scheduler-embed-certs-312549             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-snsdd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-gfpxp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m27s              kube-proxy       
	  Normal   Starting                 60s                kube-proxy       
	  Normal   NodeHasSufficientPID     2m35s              kubelet          Node embed-certs-312549 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 2m35s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m35s              kubelet          Node embed-certs-312549 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m35s              kubelet          Node embed-certs-312549 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m35s              kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m30s              node-controller  Node embed-certs-312549 event: Registered Node embed-certs-312549 in Controller
	  Normal   NodeReady                108s               kubelet          Node embed-certs-312549 status is now: NodeReady
	  Normal   Starting                 71s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 71s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  71s (x8 over 71s)  kubelet          Node embed-certs-312549 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    71s (x8 over 71s)  kubelet          Node embed-certs-312549 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     71s (x8 over 71s)  kubelet          Node embed-certs-312549 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           59s                node-controller  Node embed-certs-312549 event: Registered Node embed-certs-312549 in Controller
	
	
	==> dmesg <==
	[Nov 1 09:12] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:13] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:14] overlayfs: idmapped layers are currently not supported
	[  +7.992192] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:15] overlayfs: idmapped layers are currently not supported
	[ +24.457663] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:16] overlayfs: idmapped layers are currently not supported
	[ +26.408819] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:18] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:22] overlayfs: idmapped layers are currently not supported
	[ +31.970573] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:24] overlayfs: idmapped layers are currently not supported
	[ +34.721891] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:25] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:26] overlayfs: idmapped layers are currently not supported
	[  +0.217637] overlayfs: idmapped layers are currently not supported
	[ +42.063471] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:29] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:30] overlayfs: idmapped layers are currently not supported
	[ +22.794250] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:31] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:32] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [680ffbebf225019dcc88b59f2110c463dad6be34ca153a1fc7b184d965991faa] <==
	{"level":"warn","ts":"2025-11-01T09:32:03.492689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:03.520153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:03.542580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:03.573667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:03.600522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:03.634062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:03.653189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:03.702430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:03.714920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:03.760726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:03.845897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:03.866848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:03.891337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:03.938651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:03.977979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:04.028073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:04.053190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:04.078175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:04.114589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:04.131933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:04.179603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:04.226984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:04.255749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:04.309246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:04.382439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57306","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:33:08 up 18:15,  0 user,  load average: 2.67, 3.35, 3.00
	Linux embed-certs-312549 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2e0be9c9bcec658ba4517bfd0df151ba737b582e932f77ed6f859646902bd9d4] <==
	I1101 09:32:06.927880       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:32:06.948293       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 09:32:06.948424       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:32:06.948436       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:32:06.948449       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:32:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:32:07.151383       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:32:07.151404       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:32:07.151412       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:32:07.151694       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 09:32:37.151557       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 09:32:37.151555       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 09:32:37.151658       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 09:32:37.152885       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1101 09:32:38.752228       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:32:38.752364       1 metrics.go:72] Registering metrics
	I1101 09:32:38.752457       1 controller.go:711] "Syncing nftables rules"
	I1101 09:32:47.152001       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 09:32:47.152073       1 main.go:301] handling current node
	I1101 09:32:57.155951       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 09:32:57.155985       1 main.go:301] handling current node
	I1101 09:33:07.155521       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 09:33:07.155560       1 main.go:301] handling current node
	
	
	==> kube-apiserver [416e95ed80a8e34d4666b94df66f5dd74615f185d64387cdea0577b26bbc3aed] <==
	I1101 09:32:06.151180       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 09:32:06.152347       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 09:32:06.161562       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 09:32:06.161608       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 09:32:06.162309       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 09:32:06.165776       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 09:32:06.165828       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:32:06.171522       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 09:32:06.171556       1 policy_source.go:240] refreshing policies
	I1101 09:32:06.182440       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:32:06.183405       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 09:32:06.184109       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:32:06.193824       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1101 09:32:06.228301       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 09:32:06.234038       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:32:06.365842       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:32:07.509390       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:32:07.642817       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:32:07.731017       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:32:07.746367       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:32:07.839277       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.162.178"}
	I1101 09:32:07.860109       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.143.94"}
	I1101 09:32:09.658618       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:32:09.809014       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:32:09.907164       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [830d779c1441c7d2da6563df9cd6c13b42ae8a0d7fba581750fdabee9972e73d] <==
	I1101 09:32:09.400612       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 09:32:09.400837       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 09:32:09.400973       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 09:32:09.401112       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 09:32:09.407541       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:32:09.407634       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:32:09.407665       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:32:09.410114       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 09:32:09.416199       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:32:09.419427       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 09:32:09.423753       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 09:32:09.425315       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 09:32:09.427688       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 09:32:09.427808       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 09:32:09.427956       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-312549"
	I1101 09:32:09.428030       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 09:32:09.430687       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 09:32:09.433311       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 09:32:09.435917       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 09:32:09.448361       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 09:32:09.448374       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 09:32:09.448391       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:32:09.448404       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:32:09.461627       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:32:09.462733       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	
	
	==> kube-proxy [b61df37594a558df51b12bb67c5ad1aee69b219068de28cbc8e135755adf63ad] <==
	I1101 09:32:07.439926       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:32:07.605591       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:32:07.706964       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:32:07.707081       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 09:32:07.707188       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:32:07.743679       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:32:07.743798       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:32:07.753059       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:32:07.753692       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:32:07.753761       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:32:07.772778       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:32:07.772797       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:32:07.773119       1 config.go:200] "Starting service config controller"
	I1101 09:32:07.773126       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:32:07.773418       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:32:07.773426       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:32:07.773792       1 config.go:309] "Starting node config controller"
	I1101 09:32:07.773799       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:32:07.773804       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:32:07.877928       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:32:07.877971       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:32:07.878009       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [ccdcc22e1e2147d9e6c4608d49f176a9919f42a514223d1fda1375c8f0c44107] <==
	I1101 09:32:05.638976       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:32:05.695089       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:32:05.698731       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:32:05.698801       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:32:05.698845       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1101 09:32:05.944186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:32:05.944271       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:32:05.944329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:32:05.944381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:32:05.944429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:32:05.944476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:32:05.944519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:32:05.944566       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:32:05.944610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:32:05.944674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:32:05.944727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:32:05.944802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:32:05.944852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:32:05.944902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:32:05.944952       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:32:05.944995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:32:05.945147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:32:05.945198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:32:06.091230       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1101 09:32:07.599978       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:32:18 embed-certs-312549 kubelet[778]: I1101 09:32:18.629599     778 scope.go:117] "RemoveContainer" containerID="c96613cfce0d09f2149bad34e372b5276cb8af441a467037c999058f36787cf2"
	Nov 01 09:32:19 embed-certs-312549 kubelet[778]: I1101 09:32:19.634423     778 scope.go:117] "RemoveContainer" containerID="c96613cfce0d09f2149bad34e372b5276cb8af441a467037c999058f36787cf2"
	Nov 01 09:32:19 embed-certs-312549 kubelet[778]: I1101 09:32:19.635403     778 scope.go:117] "RemoveContainer" containerID="a6f28ddc6ec5e55b2c0345e908ceb765766b8476573696fed56e703859e1fa5b"
	Nov 01 09:32:19 embed-certs-312549 kubelet[778]: E1101 09:32:19.635708     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-snsdd_kubernetes-dashboard(8de8d7dd-5e50-487c-b2f9-c02e603af168)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-snsdd" podUID="8de8d7dd-5e50-487c-b2f9-c02e603af168"
	Nov 01 09:32:20 embed-certs-312549 kubelet[778]: I1101 09:32:20.647449     778 scope.go:117] "RemoveContainer" containerID="a6f28ddc6ec5e55b2c0345e908ceb765766b8476573696fed56e703859e1fa5b"
	Nov 01 09:32:20 embed-certs-312549 kubelet[778]: E1101 09:32:20.648408     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-snsdd_kubernetes-dashboard(8de8d7dd-5e50-487c-b2f9-c02e603af168)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-snsdd" podUID="8de8d7dd-5e50-487c-b2f9-c02e603af168"
	Nov 01 09:32:21 embed-certs-312549 kubelet[778]: I1101 09:32:21.837430     778 scope.go:117] "RemoveContainer" containerID="a6f28ddc6ec5e55b2c0345e908ceb765766b8476573696fed56e703859e1fa5b"
	Nov 01 09:32:21 embed-certs-312549 kubelet[778]: E1101 09:32:21.837636     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-snsdd_kubernetes-dashboard(8de8d7dd-5e50-487c-b2f9-c02e603af168)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-snsdd" podUID="8de8d7dd-5e50-487c-b2f9-c02e603af168"
	Nov 01 09:32:33 embed-certs-312549 kubelet[778]: I1101 09:32:33.421387     778 scope.go:117] "RemoveContainer" containerID="a6f28ddc6ec5e55b2c0345e908ceb765766b8476573696fed56e703859e1fa5b"
	Nov 01 09:32:33 embed-certs-312549 kubelet[778]: I1101 09:32:33.677356     778 scope.go:117] "RemoveContainer" containerID="a6f28ddc6ec5e55b2c0345e908ceb765766b8476573696fed56e703859e1fa5b"
	Nov 01 09:32:33 embed-certs-312549 kubelet[778]: I1101 09:32:33.677635     778 scope.go:117] "RemoveContainer" containerID="ddcefe5efe0dcb19b36708e18ad76abafe74fa123641b6b402c89205d3d0731a"
	Nov 01 09:32:33 embed-certs-312549 kubelet[778]: E1101 09:32:33.677806     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-snsdd_kubernetes-dashboard(8de8d7dd-5e50-487c-b2f9-c02e603af168)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-snsdd" podUID="8de8d7dd-5e50-487c-b2f9-c02e603af168"
	Nov 01 09:32:33 embed-certs-312549 kubelet[778]: I1101 09:32:33.706030     778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gfpxp" podStartSLOduration=12.215072658 podStartE2EDuration="24.706010436s" podCreationTimestamp="2025-11-01 09:32:09 +0000 UTC" firstStartedPulling="2025-11-01 09:32:11.910506305 +0000 UTC m=+14.945287813" lastFinishedPulling="2025-11-01 09:32:24.401444083 +0000 UTC m=+27.436225591" observedRunningTime="2025-11-01 09:32:24.677072058 +0000 UTC m=+27.711853583" watchObservedRunningTime="2025-11-01 09:32:33.706010436 +0000 UTC m=+36.740791960"
	Nov 01 09:32:37 embed-certs-312549 kubelet[778]: I1101 09:32:37.690661     778 scope.go:117] "RemoveContainer" containerID="94bc258df31ccba3243d19817bf0540c4bc6e3b16c101f6d659f8223b0db31ac"
	Nov 01 09:32:41 embed-certs-312549 kubelet[778]: I1101 09:32:41.836367     778 scope.go:117] "RemoveContainer" containerID="ddcefe5efe0dcb19b36708e18ad76abafe74fa123641b6b402c89205d3d0731a"
	Nov 01 09:32:41 embed-certs-312549 kubelet[778]: E1101 09:32:41.836555     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-snsdd_kubernetes-dashboard(8de8d7dd-5e50-487c-b2f9-c02e603af168)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-snsdd" podUID="8de8d7dd-5e50-487c-b2f9-c02e603af168"
	Nov 01 09:32:57 embed-certs-312549 kubelet[778]: I1101 09:32:57.422474     778 scope.go:117] "RemoveContainer" containerID="ddcefe5efe0dcb19b36708e18ad76abafe74fa123641b6b402c89205d3d0731a"
	Nov 01 09:32:57 embed-certs-312549 kubelet[778]: I1101 09:32:57.740091     778 scope.go:117] "RemoveContainer" containerID="ddcefe5efe0dcb19b36708e18ad76abafe74fa123641b6b402c89205d3d0731a"
	Nov 01 09:32:57 embed-certs-312549 kubelet[778]: I1101 09:32:57.740676     778 scope.go:117] "RemoveContainer" containerID="4247865bc1f3f240b55592eaebb9c6bd7f2d474b8128881a488f19dcbf493252"
	Nov 01 09:32:57 embed-certs-312549 kubelet[778]: E1101 09:32:57.740926     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-snsdd_kubernetes-dashboard(8de8d7dd-5e50-487c-b2f9-c02e603af168)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-snsdd" podUID="8de8d7dd-5e50-487c-b2f9-c02e603af168"
	Nov 01 09:33:01 embed-certs-312549 kubelet[778]: I1101 09:33:01.836249     778 scope.go:117] "RemoveContainer" containerID="4247865bc1f3f240b55592eaebb9c6bd7f2d474b8128881a488f19dcbf493252"
	Nov 01 09:33:01 embed-certs-312549 kubelet[778]: E1101 09:33:01.836899     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-snsdd_kubernetes-dashboard(8de8d7dd-5e50-487c-b2f9-c02e603af168)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-snsdd" podUID="8de8d7dd-5e50-487c-b2f9-c02e603af168"
	Nov 01 09:33:02 embed-certs-312549 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 09:33:02 embed-certs-312549 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 09:33:02 embed-certs-312549 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [72995eb1c1da3b7de9fbddf97b960ce6553fff7c8c569ec7720907d1b0ce191a] <==
	2025/11/01 09:32:24 Using namespace: kubernetes-dashboard
	2025/11/01 09:32:24 Using in-cluster config to connect to apiserver
	2025/11/01 09:32:24 Using secret token for csrf signing
	2025/11/01 09:32:24 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 09:32:24 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 09:32:24 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 09:32:24 Generating JWE encryption key
	2025/11/01 09:32:24 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 09:32:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 09:32:25 Initializing JWE encryption key from synchronized object
	2025/11/01 09:32:25 Creating in-cluster Sidecar client
	2025/11/01 09:32:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:32:25 Serving insecurely on HTTP port: 9090
	2025/11/01 09:32:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:32:24 Starting overwatch
	
	
	==> storage-provisioner [88f21e91d38eea474220af6738f4c80b59005263d6a122d6f4ea2dbb094eb4e7] <==
	W1101 09:32:37.753712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:41.208991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:45.469551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:49.069493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:52.123504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:55.145464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:55.150509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:32:55.150733       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 09:32:55.151346       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-312549_3958039d-628c-46ca-9d04-8c38630256d0!
	I1101 09:32:55.151165       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f44664d7-ce86-4249-89be-cbecba2dd10b", APIVersion:"v1", ResourceVersion:"645", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-312549_3958039d-628c-46ca-9d04-8c38630256d0 became leader
	W1101 09:32:55.155657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:55.161877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:32:55.252170       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-312549_3958039d-628c-46ca-9d04-8c38630256d0!
	W1101 09:32:57.165618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:57.170791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:59.174584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:59.178375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:33:01.182487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:33:01.187926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:33:03.192114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:33:03.201921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:33:05.205677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:33:05.214532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:33:07.218404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:33:07.223588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [94bc258df31ccba3243d19817bf0540c4bc6e3b16c101f6d659f8223b0db31ac] <==
	I1101 09:32:07.033557       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 09:32:37.035424       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
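The second storage-provisioner container in the logs above dies at startup because its initial GET against the apiserver (the kubernetes Service ClusterIP, https://10.96.0.1:443/version) times out, which is consistent with the control plane having been paused at that point in the test. A rough manual version of that probe, assuming the embed-certs-312549 profile were still running and unpaused (illustrative only, not part of the test):

	kubectl --context embed-certs-312549 -n default get svc kubernetes -o wide
	minikube -p embed-certs-312549 ssh -- curl -sk --max-time 5 https://10.96.0.1:443/version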
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-312549 -n embed-certs-312549
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-312549 -n embed-certs-312549: exit status 2 (370.716258ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-312549 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-703627 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-703627 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (308.407659ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:33:33Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
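The exit status 11 comes from minikube's "check paused" pre-flight, which shells into the node and runs the `sudo runc list -f json` shown in the stderr above; per the error, /run/runc is missing on this crio node at that moment, so the listing fails. Two illustrative commands for inspecting the same state by hand, assuming the default-k8s-diff-port-703627 profile is still up (not run by the test):

	minikube -p default-k8s-diff-port-703627 ssh -- sudo ls -ld /run/runc
	minikube -p default-k8s-diff-port-703627 ssh -- sudo crictl ps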
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-703627 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-703627 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-703627 describe deploy/metrics-server -n kube-system: exit status 1 (107.686096ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-703627 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
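The assertion expects the metrics-server Deployment's image to carry the fake.domain/registry.k8s.io/echoserver:1.4 override passed via --images/--registries; because the enable itself failed, the Deployment was never created and the describe above returns NotFound. Once the addon does come up, one way to read back just the container image (illustrative, not part of the test):

	kubectl --context default-k8s-diff-port-703627 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'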
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-703627
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-703627:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a747d7437780c8943ddef42d5ec2400858d0693e94483b75825664710eb98d9e",
	        "Created": "2025-11-01T09:32:04.900915027Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2509330,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:32:04.958974397Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/a747d7437780c8943ddef42d5ec2400858d0693e94483b75825664710eb98d9e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a747d7437780c8943ddef42d5ec2400858d0693e94483b75825664710eb98d9e/hostname",
	        "HostsPath": "/var/lib/docker/containers/a747d7437780c8943ddef42d5ec2400858d0693e94483b75825664710eb98d9e/hosts",
	        "LogPath": "/var/lib/docker/containers/a747d7437780c8943ddef42d5ec2400858d0693e94483b75825664710eb98d9e/a747d7437780c8943ddef42d5ec2400858d0693e94483b75825664710eb98d9e-json.log",
	        "Name": "/default-k8s-diff-port-703627",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-703627:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-703627",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a747d7437780c8943ddef42d5ec2400858d0693e94483b75825664710eb98d9e",
	                "LowerDir": "/var/lib/docker/overlay2/b02ded618c44d4ffa151302b7a817601e39ffe2f362b1ddbc18b362601181ea2-init/diff:/var/lib/docker/overlay2/e248e2c4c8c52e2b41c7098e27a1e6d3433c7b0d01c47093073da500268c4b77/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b02ded618c44d4ffa151302b7a817601e39ffe2f362b1ddbc18b362601181ea2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b02ded618c44d4ffa151302b7a817601e39ffe2f362b1ddbc18b362601181ea2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b02ded618c44d4ffa151302b7a817601e39ffe2f362b1ddbc18b362601181ea2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-703627",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-703627/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-703627",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-703627",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-703627",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "351c25a962ca9d2424416b6b283e09a7eb17ce1f1414f11dcb55ed03175aed12",
	            "SandboxKey": "/var/run/docker/netns/351c25a962ca",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36365"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36366"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36369"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36367"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36368"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-703627": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:f5:c2:3b:cd:26",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f92e55acde037535672c6bdfac6afcfec87a27f01e6451819c4f246fbcbac0db",
	                    "EndpointID": "91b3408dd7fc460f29829bb871e0b09c30d41e310db246b33f278beaa80290b2",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-703627",
	                        "a747d7437780"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
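In this inspect output the HostConfig.PortBindings entries request ephemeral host ports (HostPort ""), and the resolved values only appear under NetworkSettings.Ports, e.g. 8444/tcp published on 127.0.0.1:36368 for the non-default apiserver port. A single resolved port can be pulled with the same Go-template pattern the minikube logs below use for 22/tcp (illustrative):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-703627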
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-703627 -n default-k8s-diff-port-703627
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-703627 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-703627 logs -n 25: (1.562515792s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p old-k8s-version-068218                                                                                                                                                                                                                     │ old-k8s-version-068218       │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ start   │ -p no-preload-357229 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:30 UTC │
	│ start   │ -p cert-expiration-218273 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-218273       │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ delete  │ -p cert-expiration-218273                                                                                                                                                                                                                     │ cert-expiration-218273       │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ start   │ -p embed-certs-312549 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:31 UTC │
	│ addons  │ enable metrics-server -p no-preload-357229 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │                     │
	│ stop    │ -p no-preload-357229 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
	│ addons  │ enable dashboard -p no-preload-357229 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
	│ start   │ -p no-preload-357229 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:31 UTC │
	│ addons  │ enable metrics-server -p embed-certs-312549 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ stop    │ -p embed-certs-312549 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ image   │ no-preload-357229 image list --format=json                                                                                                                                                                                                    │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ addons  │ enable dashboard -p embed-certs-312549 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ start   │ -p embed-certs-312549 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:32 UTC │
	│ pause   │ -p no-preload-357229 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ delete  │ -p no-preload-357229                                                                                                                                                                                                                          │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ delete  │ -p no-preload-357229                                                                                                                                                                                                                          │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ delete  │ -p disable-driver-mounts-054033                                                                                                                                                                                                               │ disable-driver-mounts-054033 │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ start   │ -p default-k8s-diff-port-703627 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-703627 │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:33 UTC │
	│ image   │ embed-certs-312549 image list --format=json                                                                                                                                                                                                   │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ pause   │ -p embed-certs-312549 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │                     │
	│ delete  │ -p embed-certs-312549                                                                                                                                                                                                                         │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ delete  │ -p embed-certs-312549                                                                                                                                                                                                                         │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ start   │ -p newest-cni-124713 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-124713            │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-703627 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-703627 │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:33:12
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:33:12.212436 2513458 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:33:12.212639 2513458 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:33:12.212674 2513458 out.go:374] Setting ErrFile to fd 2...
	I1101 09:33:12.212695 2513458 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:33:12.212982 2513458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 09:33:12.213506 2513458 out.go:368] Setting JSON to false
	I1101 09:33:12.214605 2513458 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":65738,"bootTime":1761923854,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 09:33:12.214701 2513458 start.go:143] virtualization:  
	I1101 09:33:12.219190 2513458 out.go:179] * [newest-cni-124713] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 09:33:12.223151 2513458 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:33:12.223220 2513458 notify.go:221] Checking for updates...
	I1101 09:33:12.232347 2513458 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:33:12.236039 2513458 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:33:12.239272 2513458 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2314135/.minikube
	I1101 09:33:12.242497 2513458 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 09:33:12.245768 2513458 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:33:12.249342 2513458 config.go:182] Loaded profile config "default-k8s-diff-port-703627": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:33:12.249476 2513458 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:33:12.281259 2513458 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:33:12.281403 2513458 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:33:12.356328 2513458 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 09:33:12.345840575 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:33:12.356434 2513458 docker.go:319] overlay module found
	I1101 09:33:12.359761 2513458 out.go:179] * Using the docker driver based on user configuration
	I1101 09:33:12.362722 2513458 start.go:309] selected driver: docker
	I1101 09:33:12.362755 2513458 start.go:930] validating driver "docker" against <nil>
	I1101 09:33:12.362769 2513458 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:33:12.363522 2513458 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:33:12.421123 2513458 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 09:33:12.41133786 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:33:12.421320 2513458 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1101 09:33:12.421353 2513458 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1101 09:33:12.421577 2513458 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 09:33:12.424482 2513458 out.go:179] * Using Docker driver with root privileges
	I1101 09:33:12.427297 2513458 cni.go:84] Creating CNI manager for ""
	I1101 09:33:12.427461 2513458 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:33:12.427480 2513458 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:33:12.427580 2513458 start.go:353] cluster config:
	{Name:newest-cni-124713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-124713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:33:12.430728 2513458 out.go:179] * Starting "newest-cni-124713" primary control-plane node in "newest-cni-124713" cluster
	I1101 09:33:12.434527 2513458 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:33:12.437578 2513458 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:33:12.440512 2513458 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:33:12.440566 2513458 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 09:33:12.440578 2513458 cache.go:59] Caching tarball of preloaded images
	I1101 09:33:12.440636 2513458 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:33:12.440700 2513458 preload.go:233] Found /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 09:33:12.440710 2513458 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:33:12.440823 2513458 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/config.json ...
	I1101 09:33:12.440841 2513458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/config.json: {Name:mkdb4bd382a01e6ad14a800d8d7cdf5e8864dd4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:33:12.460632 2513458 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:33:12.460655 2513458 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:33:12.460677 2513458 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:33:12.460708 2513458 start.go:360] acquireMachinesLock for newest-cni-124713: {Name:mkc03165af37613c9c0e7f1c90ff2df91e2b25ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:33:12.460817 2513458 start.go:364] duration metric: took 87.39µs to acquireMachinesLock for "newest-cni-124713"
	I1101 09:33:12.460849 2513458 start.go:93] Provisioning new machine with config: &{Name:newest-cni-124713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-124713 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:33:12.460922 2513458 start.go:125] createHost starting for "" (driver="docker")
	W1101 09:33:10.359454 2508765 node_ready.go:57] node "default-k8s-diff-port-703627" has "Ready":"False" status (will retry)
	W1101 09:33:12.858286 2508765 node_ready.go:57] node "default-k8s-diff-port-703627" has "Ready":"False" status (will retry)
	I1101 09:33:12.464420 2513458 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 09:33:12.464646 2513458 start.go:159] libmachine.API.Create for "newest-cni-124713" (driver="docker")
	I1101 09:33:12.464699 2513458 client.go:173] LocalClient.Create starting
	I1101 09:33:12.464799 2513458 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem
	I1101 09:33:12.464838 2513458 main.go:143] libmachine: Decoding PEM data...
	I1101 09:33:12.464853 2513458 main.go:143] libmachine: Parsing certificate...
	I1101 09:33:12.464917 2513458 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem
	I1101 09:33:12.464941 2513458 main.go:143] libmachine: Decoding PEM data...
	I1101 09:33:12.464959 2513458 main.go:143] libmachine: Parsing certificate...
	I1101 09:33:12.465334 2513458 cli_runner.go:164] Run: docker network inspect newest-cni-124713 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 09:33:12.482108 2513458 cli_runner.go:211] docker network inspect newest-cni-124713 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 09:33:12.482201 2513458 network_create.go:284] running [docker network inspect newest-cni-124713] to gather additional debugging logs...
	I1101 09:33:12.482223 2513458 cli_runner.go:164] Run: docker network inspect newest-cni-124713
	W1101 09:33:12.498572 2513458 cli_runner.go:211] docker network inspect newest-cni-124713 returned with exit code 1
	I1101 09:33:12.498604 2513458 network_create.go:287] error running [docker network inspect newest-cni-124713]: docker network inspect newest-cni-124713: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-124713 not found
	I1101 09:33:12.498631 2513458 network_create.go:289] output of [docker network inspect newest-cni-124713]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-124713 not found
	
	** /stderr **
	I1101 09:33:12.498723 2513458 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:33:12.514931 2513458 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2d14cb2bf967 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:44:96:dd:d5:f7} reservation:<nil>}
	I1101 09:33:12.515415 2513458 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5e2113ca68f6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fa:43:2d:73:9d:6f} reservation:<nil>}
	I1101 09:33:12.515967 2513458 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-06825307e87a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:46:bb:6a:93:8e:bc} reservation:<nil>}
	I1101 09:33:12.516428 2513458 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a3e460}
	I1101 09:33:12.516452 2513458 network_create.go:124] attempt to create docker network newest-cni-124713 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1101 09:33:12.516510 2513458 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-124713 newest-cni-124713
	I1101 09:33:12.577561 2513458 network_create.go:108] docker network newest-cni-124713 192.168.76.0/24 created
	I1101 09:33:12.577593 2513458 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-124713" container
	I1101 09:33:12.577680 2513458 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 09:33:12.594839 2513458 cli_runner.go:164] Run: docker volume create newest-cni-124713 --label name.minikube.sigs.k8s.io=newest-cni-124713 --label created_by.minikube.sigs.k8s.io=true
	I1101 09:33:12.613556 2513458 oci.go:103] Successfully created a docker volume newest-cni-124713
	I1101 09:33:12.613646 2513458 cli_runner.go:164] Run: docker run --rm --name newest-cni-124713-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-124713 --entrypoint /usr/bin/test -v newest-cni-124713:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 09:33:13.167597 2513458 oci.go:107] Successfully prepared a docker volume newest-cni-124713
	I1101 09:33:13.167648 2513458 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:33:13.167668 2513458 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 09:33:13.167744 2513458 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-124713:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1101 09:33:14.858336 2508765 node_ready.go:57] node "default-k8s-diff-port-703627" has "Ready":"False" status (will retry)
	W1101 09:33:16.858476 2508765 node_ready.go:57] node "default-k8s-diff-port-703627" has "Ready":"False" status (will retry)
	I1101 09:33:17.508978 2513458 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-124713:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.341189629s)
	I1101 09:33:17.509010 2513458 kic.go:203] duration metric: took 4.341338302s to extract preloaded images to volume ...
	W1101 09:33:17.509164 2513458 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 09:33:17.509275 2513458 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 09:33:17.564294 2513458 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-124713 --name newest-cni-124713 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-124713 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-124713 --network newest-cni-124713 --ip 192.168.76.2 --volume newest-cni-124713:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 09:33:17.852269 2513458 cli_runner.go:164] Run: docker container inspect newest-cni-124713 --format={{.State.Running}}
	I1101 09:33:17.880560 2513458 cli_runner.go:164] Run: docker container inspect newest-cni-124713 --format={{.State.Status}}
	I1101 09:33:17.905690 2513458 cli_runner.go:164] Run: docker exec newest-cni-124713 stat /var/lib/dpkg/alternatives/iptables
	I1101 09:33:17.961660 2513458 oci.go:144] the created container "newest-cni-124713" has a running status.
	I1101 09:33:17.961696 2513458 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/newest-cni-124713/id_rsa...
	I1101 09:33:18.211055 2513458 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/newest-cni-124713/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 09:33:18.240118 2513458 cli_runner.go:164] Run: docker container inspect newest-cni-124713 --format={{.State.Status}}
	I1101 09:33:18.265847 2513458 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 09:33:18.265871 2513458 kic_runner.go:114] Args: [docker exec --privileged newest-cni-124713 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 09:33:18.333175 2513458 cli_runner.go:164] Run: docker container inspect newest-cni-124713 --format={{.State.Status}}
	I1101 09:33:18.360703 2513458 machine.go:94] provisionDockerMachine start ...
	I1101 09:33:18.360801 2513458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-124713
	I1101 09:33:18.388938 2513458 main.go:143] libmachine: Using SSH client type: native
	I1101 09:33:18.389288 2513458 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36370 <nil> <nil>}
	I1101 09:33:18.389298 2513458 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:33:18.389885 2513458 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35030->127.0.0.1:36370: read: connection reset by peer
	I1101 09:33:21.548189 2513458 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-124713
	
	I1101 09:33:21.548216 2513458 ubuntu.go:182] provisioning hostname "newest-cni-124713"
	I1101 09:33:21.548283 2513458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-124713
	I1101 09:33:21.578094 2513458 main.go:143] libmachine: Using SSH client type: native
	I1101 09:33:21.578403 2513458 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36370 <nil> <nil>}
	I1101 09:33:21.578420 2513458 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-124713 && echo "newest-cni-124713" | sudo tee /etc/hostname
	I1101 09:33:21.757854 2513458 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-124713
	
	I1101 09:33:21.758003 2513458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-124713
	I1101 09:33:21.775787 2513458 main.go:143] libmachine: Using SSH client type: native
	I1101 09:33:21.776222 2513458 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36370 <nil> <nil>}
	I1101 09:33:21.776249 2513458 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-124713' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-124713/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-124713' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:33:21.936052 2513458 main.go:143] libmachine: SSH cmd err, output: <nil>: 
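The provisioning step above sets the hostname over SSH and then runs an idempotent shell snippet so /etc/hosts carries a 127.0.1.1 entry for it. A minimal Go sketch of building that snippet for an arbitrary hostname (the helper name hostsSnippet is invented for illustration):

	package main

	import "fmt"

	// hostsSnippet returns a shell snippet that adds or rewrites the
	// 127.0.1.1 entry for the given hostname, mirroring the command above.
	func hostsSnippet(hostname string) string {
		return fmt.Sprintf(`
			if ! grep -xq '.*\s%[1]s' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
				else
					echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
				fi
			fi`, hostname)
	}

	func main() {
		fmt.Println(hostsSnippet("newest-cni-124713"))
	}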
	I1101 09:33:21.936152 2513458 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-2314135/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-2314135/.minikube}
	I1101 09:33:21.936203 2513458 ubuntu.go:190] setting up certificates
	I1101 09:33:21.936234 2513458 provision.go:84] configureAuth start
	I1101 09:33:21.936322 2513458 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-124713
	I1101 09:33:21.960223 2513458 provision.go:143] copyHostCerts
	I1101 09:33:21.960284 2513458 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem, removing ...
	I1101 09:33:21.960294 2513458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem
	I1101 09:33:21.961451 2513458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem (1082 bytes)
	I1101 09:33:21.961559 2513458 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem, removing ...
	I1101 09:33:21.961566 2513458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem
	I1101 09:33:21.961595 2513458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem (1123 bytes)
	I1101 09:33:21.961650 2513458 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem, removing ...
	I1101 09:33:21.961655 2513458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem
	I1101 09:33:21.961677 2513458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem (1675 bytes)
	I1101 09:33:21.961722 2513458 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem org=jenkins.newest-cni-124713 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-124713]
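configureAuth issues a server certificate whose SANs cover the loopback address, the container IP and the machine names listed above. As a generic standard-library sketch only (self-signed for brevity, whereas the real flow signs with the cluster CA key, and none of this is minikube's actual certificate code):

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Key and template for a server certificate whose SANs match the
		// log line above (IPs plus host names).
		key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-124713"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
			DNSNames:     []string{"localhost", "minikube", "newest-cni-124713"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}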
	W1101 09:33:19.358352 2508765 node_ready.go:57] node "default-k8s-diff-port-703627" has "Ready":"False" status (will retry)
	W1101 09:33:21.358563 2508765 node_ready.go:57] node "default-k8s-diff-port-703627" has "Ready":"False" status (will retry)
	I1101 09:33:21.871994 2508765 node_ready.go:49] node "default-k8s-diff-port-703627" is "Ready"
	I1101 09:33:21.872021 2508765 node_ready.go:38] duration metric: took 41.016223489s for node "default-k8s-diff-port-703627" to be "Ready" ...
	I1101 09:33:21.872035 2508765 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:33:21.872092 2508765 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:33:21.900559 2508765 api_server.go:72] duration metric: took 41.909273955s to wait for apiserver process to appear ...
	I1101 09:33:21.900580 2508765 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:33:21.900598 2508765 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1101 09:33:21.912906 2508765 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1101 09:33:21.914204 2508765 api_server.go:141] control plane version: v1.34.1
	I1101 09:33:21.914257 2508765 api_server.go:131] duration metric: took 13.669315ms to wait for apiserver health ...
	I1101 09:33:21.914278 2508765 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:33:21.919448 2508765 system_pods.go:59] 9 kube-system pods found
	I1101 09:33:21.919486 2508765 system_pods.go:61] "coredns-66bc5c9577-7hh2n" [27a206c0-1b3c-477f-a1c8-63a1f5c04dac] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:33:21.919495 2508765 system_pods.go:61] "coredns-66bc5c9577-mbmf5" [d919bbe5-a51f-497a-ae3b-e76e42dfb5c4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:33:21.919503 2508765 system_pods.go:61] "etcd-default-k8s-diff-port-703627" [ee4635c2-2a7e-4940-a911-a6776fb4bf06] Running
	I1101 09:33:21.919508 2508765 system_pods.go:61] "kindnet-td2vz" [b0d693ff-55a9-4906-891d-28f7d9849789] Running
	I1101 09:33:21.919515 2508765 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-703627" [6547f2f4-7d33-4b6b-b603-720e901c4f38] Running
	I1101 09:33:21.919525 2508765 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-703627" [7d330496-b41b-4395-8c59-fdfcfc6043fe] Running
	I1101 09:33:21.919531 2508765 system_pods.go:61] "kube-proxy-6lwj9" [f48fe986-0db5-425e-a988-0396b9bd45a8] Running
	I1101 09:33:21.919543 2508765 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-703627" [baf327b2-0afe-4ed0-bff5-1c4d1d5e4e85] Running
	I1101 09:33:21.919549 2508765 system_pods.go:61] "storage-provisioner" [102037a1-7d8b-49cc-9a86-be75b4bfdcfe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:33:21.919556 2508765 system_pods.go:74] duration metric: took 5.260235ms to wait for pod list to return data ...
	I1101 09:33:21.919568 2508765 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:33:21.924713 2508765 default_sa.go:45] found service account: "default"
	I1101 09:33:21.924737 2508765 default_sa.go:55] duration metric: took 5.163312ms for default service account to be created ...
	I1101 09:33:21.924747 2508765 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:33:21.928218 2508765 system_pods.go:86] 9 kube-system pods found
	I1101 09:33:21.928254 2508765 system_pods.go:89] "coredns-66bc5c9577-7hh2n" [27a206c0-1b3c-477f-a1c8-63a1f5c04dac] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:33:21.928263 2508765 system_pods.go:89] "coredns-66bc5c9577-mbmf5" [d919bbe5-a51f-497a-ae3b-e76e42dfb5c4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:33:21.928269 2508765 system_pods.go:89] "etcd-default-k8s-diff-port-703627" [ee4635c2-2a7e-4940-a911-a6776fb4bf06] Running
	I1101 09:33:21.928274 2508765 system_pods.go:89] "kindnet-td2vz" [b0d693ff-55a9-4906-891d-28f7d9849789] Running
	I1101 09:33:21.928279 2508765 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-703627" [6547f2f4-7d33-4b6b-b603-720e901c4f38] Running
	I1101 09:33:21.928284 2508765 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-703627" [7d330496-b41b-4395-8c59-fdfcfc6043fe] Running
	I1101 09:33:21.928288 2508765 system_pods.go:89] "kube-proxy-6lwj9" [f48fe986-0db5-425e-a988-0396b9bd45a8] Running
	I1101 09:33:21.928293 2508765 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-703627" [baf327b2-0afe-4ed0-bff5-1c4d1d5e4e85] Running
	I1101 09:33:21.928303 2508765 system_pods.go:89] "storage-provisioner" [102037a1-7d8b-49cc-9a86-be75b4bfdcfe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:33:21.928322 2508765 retry.go:31] will retry after 209.06729ms: missing components: kube-dns
	I1101 09:33:22.145126 2508765 system_pods.go:86] 9 kube-system pods found
	I1101 09:33:22.145158 2508765 system_pods.go:89] "coredns-66bc5c9577-7hh2n" [27a206c0-1b3c-477f-a1c8-63a1f5c04dac] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:33:22.145170 2508765 system_pods.go:89] "coredns-66bc5c9577-mbmf5" [d919bbe5-a51f-497a-ae3b-e76e42dfb5c4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:33:22.145176 2508765 system_pods.go:89] "etcd-default-k8s-diff-port-703627" [ee4635c2-2a7e-4940-a911-a6776fb4bf06] Running
	I1101 09:33:22.145182 2508765 system_pods.go:89] "kindnet-td2vz" [b0d693ff-55a9-4906-891d-28f7d9849789] Running
	I1101 09:33:22.145186 2508765 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-703627" [6547f2f4-7d33-4b6b-b603-720e901c4f38] Running
	I1101 09:33:22.145190 2508765 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-703627" [7d330496-b41b-4395-8c59-fdfcfc6043fe] Running
	I1101 09:33:22.145194 2508765 system_pods.go:89] "kube-proxy-6lwj9" [f48fe986-0db5-425e-a988-0396b9bd45a8] Running
	I1101 09:33:22.145198 2508765 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-703627" [baf327b2-0afe-4ed0-bff5-1c4d1d5e4e85] Running
	I1101 09:33:22.145203 2508765 system_pods.go:89] "storage-provisioner" [102037a1-7d8b-49cc-9a86-be75b4bfdcfe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:33:22.145218 2508765 retry.go:31] will retry after 272.860253ms: missing components: kube-dns
	I1101 09:33:22.433927 2508765 system_pods.go:86] 9 kube-system pods found
	I1101 09:33:22.433964 2508765 system_pods.go:89] "coredns-66bc5c9577-7hh2n" [27a206c0-1b3c-477f-a1c8-63a1f5c04dac] Running
	I1101 09:33:22.433975 2508765 system_pods.go:89] "coredns-66bc5c9577-mbmf5" [d919bbe5-a51f-497a-ae3b-e76e42dfb5c4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:33:22.433980 2508765 system_pods.go:89] "etcd-default-k8s-diff-port-703627" [ee4635c2-2a7e-4940-a911-a6776fb4bf06] Running
	I1101 09:33:22.433987 2508765 system_pods.go:89] "kindnet-td2vz" [b0d693ff-55a9-4906-891d-28f7d9849789] Running
	I1101 09:33:22.433992 2508765 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-703627" [6547f2f4-7d33-4b6b-b603-720e901c4f38] Running
	I1101 09:33:22.433996 2508765 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-703627" [7d330496-b41b-4395-8c59-fdfcfc6043fe] Running
	I1101 09:33:22.434000 2508765 system_pods.go:89] "kube-proxy-6lwj9" [f48fe986-0db5-425e-a988-0396b9bd45a8] Running
	I1101 09:33:22.434004 2508765 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-703627" [baf327b2-0afe-4ed0-bff5-1c4d1d5e4e85] Running
	I1101 09:33:22.434010 2508765 system_pods.go:89] "storage-provisioner" [102037a1-7d8b-49cc-9a86-be75b4bfdcfe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:33:22.434017 2508765 system_pods.go:126] duration metric: took 509.263955ms to wait for k8s-apps to be running ...
	I1101 09:33:22.434029 2508765 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:33:22.434084 2508765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:33:22.465425 2508765 system_svc.go:56] duration metric: took 31.385731ms WaitForService to wait for kubelet
	I1101 09:33:22.465450 2508765 kubeadm.go:587] duration metric: took 42.47417092s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:33:22.465468 2508765 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:33:22.486684 2508765 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 09:33:22.486716 2508765 node_conditions.go:123] node cpu capacity is 2
	I1101 09:33:22.486729 2508765 node_conditions.go:105] duration metric: took 21.255886ms to run NodePressure ...
	I1101 09:33:22.486742 2508765 start.go:242] waiting for startup goroutines ...
	I1101 09:33:22.486751 2508765 start.go:247] waiting for cluster config update ...
	I1101 09:33:22.486762 2508765 start.go:256] writing updated cluster config ...
	I1101 09:33:22.487026 2508765 ssh_runner.go:195] Run: rm -f paused
	I1101 09:33:22.492897 2508765 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:33:22.530762 2508765 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7hh2n" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:33:22.536537 2508765 pod_ready.go:94] pod "coredns-66bc5c9577-7hh2n" is "Ready"
	I1101 09:33:22.536558 2508765 pod_ready.go:86] duration metric: took 5.765887ms for pod "coredns-66bc5c9577-7hh2n" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:33:22.536568 2508765 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mbmf5" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:33:22.541844 2508765 pod_ready.go:94] pod "coredns-66bc5c9577-mbmf5" is "Ready"
	I1101 09:33:22.541866 2508765 pod_ready.go:86] duration metric: took 5.291603ms for pod "coredns-66bc5c9577-mbmf5" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:33:22.544381 2508765 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-703627" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:33:22.549834 2508765 pod_ready.go:94] pod "etcd-default-k8s-diff-port-703627" is "Ready"
	I1101 09:33:22.549856 2508765 pod_ready.go:86] duration metric: took 5.456218ms for pod "etcd-default-k8s-diff-port-703627" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:33:22.552546 2508765 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-703627" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:33:22.697901 2508765 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-703627" is "Ready"
	I1101 09:33:22.697945 2508765 pod_ready.go:86] duration metric: took 145.376325ms for pod "kube-apiserver-default-k8s-diff-port-703627" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:33:22.897950 2508765 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-703627" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:33:23.296304 2508765 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-703627" is "Ready"
	I1101 09:33:23.296328 2508765 pod_ready.go:86] duration metric: took 398.308707ms for pod "kube-controller-manager-default-k8s-diff-port-703627" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:33:23.496411 2508765 pod_ready.go:83] waiting for pod "kube-proxy-6lwj9" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:33:23.896448 2508765 pod_ready.go:94] pod "kube-proxy-6lwj9" is "Ready"
	I1101 09:33:23.896479 2508765 pod_ready.go:86] duration metric: took 400.038842ms for pod "kube-proxy-6lwj9" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:33:24.097275 2508765 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-703627" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:33:24.498208 2508765 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-703627" is "Ready"
	I1101 09:33:24.498242 2508765 pod_ready.go:86] duration metric: took 400.928048ms for pod "kube-scheduler-default-k8s-diff-port-703627" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:33:24.498254 2508765 pod_ready.go:40] duration metric: took 2.005318035s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:33:24.580572 2508765 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 09:33:24.584312 2508765 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-703627" cluster and "default" namespace by default
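The readiness checks above poll the cluster and, when a component such as kube-dns is still missing, back off and retry ("will retry after ..."). A minimal sketch of that polling pattern, with a made-up check function standing in for the real system-pods lookup:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitFor polls check until it succeeds or the timeout elapses, sleeping
	// a small randomized interval between attempts, similar in spirit to the
	// "will retry after ..." lines above.
	func waitFor(timeout time.Duration, check func() error) error {
		deadline := time.Now().Add(timeout)
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out: last error: %w", err)
			}
			delay := 200*time.Millisecond + time.Duration(rand.Intn(300))*time.Millisecond
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
	}

	func main() {
		attempts := 0
		err := waitFor(5*time.Second, func() error {
			attempts++
			if attempts < 3 {
				return errors.New("missing components: kube-dns")
			}
			return nil
		})
		fmt.Println("done:", err)
	}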
	I1101 09:33:23.161867 2513458 provision.go:177] copyRemoteCerts
	I1101 09:33:23.161935 2513458 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:33:23.161973 2513458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-124713
	I1101 09:33:23.178048 2513458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36370 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/newest-cni-124713/id_rsa Username:docker}
	I1101 09:33:23.287939 2513458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:33:23.306503 2513458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:33:23.323471 2513458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 09:33:23.340758 2513458 provision.go:87] duration metric: took 1.404489571s to configureAuth
	I1101 09:33:23.340784 2513458 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:33:23.340968 2513458 config.go:182] Loaded profile config "newest-cni-124713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:33:23.341080 2513458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-124713
	I1101 09:33:23.360627 2513458 main.go:143] libmachine: Using SSH client type: native
	I1101 09:33:23.360950 2513458 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36370 <nil> <nil>}
	I1101 09:33:23.360974 2513458 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:33:23.621836 2513458 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:33:23.621878 2513458 machine.go:97] duration metric: took 5.261154881s to provisionDockerMachine
	I1101 09:33:23.621904 2513458 client.go:176] duration metric: took 11.157177711s to LocalClient.Create
	I1101 09:33:23.621926 2513458 start.go:167] duration metric: took 11.157280814s to libmachine.API.Create "newest-cni-124713"
	I1101 09:33:23.621938 2513458 start.go:293] postStartSetup for "newest-cni-124713" (driver="docker")
	I1101 09:33:23.621948 2513458 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:33:23.622016 2513458 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:33:23.622070 2513458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-124713
	I1101 09:33:23.639025 2513458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36370 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/newest-cni-124713/id_rsa Username:docker}
	I1101 09:33:23.744124 2513458 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:33:23.747739 2513458 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:33:23.747769 2513458 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:33:23.747781 2513458 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/addons for local assets ...
	I1101 09:33:23.747834 2513458 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/files for local assets ...
	I1101 09:33:23.747955 2513458 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem -> 23159822.pem in /etc/ssl/certs
	I1101 09:33:23.748060 2513458 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:33:23.755290 2513458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem --> /etc/ssl/certs/23159822.pem (1708 bytes)
	I1101 09:33:23.773574 2513458 start.go:296] duration metric: took 151.608964ms for postStartSetup
	I1101 09:33:23.773918 2513458 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-124713
	I1101 09:33:23.790606 2513458 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/config.json ...
	I1101 09:33:23.790875 2513458 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:33:23.790934 2513458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-124713
	I1101 09:33:23.807946 2513458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36370 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/newest-cni-124713/id_rsa Username:docker}
	I1101 09:33:23.912772 2513458 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:33:23.917120 2513458 start.go:128] duration metric: took 11.456170764s to createHost
	I1101 09:33:23.917180 2513458 start.go:83] releasing machines lock for "newest-cni-124713", held for 11.456348154s
	I1101 09:33:23.917288 2513458 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-124713
	I1101 09:33:23.933204 2513458 ssh_runner.go:195] Run: cat /version.json
	I1101 09:33:23.933260 2513458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-124713
	I1101 09:33:23.933374 2513458 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:33:23.933538 2513458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-124713
	I1101 09:33:23.960973 2513458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36370 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/newest-cni-124713/id_rsa Username:docker}
	I1101 09:33:23.981505 2513458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36370 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/newest-cni-124713/id_rsa Username:docker}
	I1101 09:33:24.162409 2513458 ssh_runner.go:195] Run: systemctl --version
	I1101 09:33:24.168909 2513458 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:33:24.205153 2513458 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:33:24.209399 2513458 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:33:24.209486 2513458 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:33:24.237528 2513458 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 09:33:24.237550 2513458 start.go:496] detecting cgroup driver to use...
	I1101 09:33:24.237583 2513458 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 09:33:24.237633 2513458 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:33:24.255243 2513458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:33:24.268438 2513458 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:33:24.268500 2513458 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:33:24.285862 2513458 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:33:24.303049 2513458 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:33:24.414336 2513458 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:33:24.546391 2513458 docker.go:234] disabling docker service ...
	I1101 09:33:24.546480 2513458 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:33:24.577227 2513458 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:33:24.598527 2513458 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:33:24.778149 2513458 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:33:24.945747 2513458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:33:24.958372 2513458 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:33:24.973826 2513458 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:33:24.973893 2513458 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:33:24.984055 2513458 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:33:24.984118 2513458 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:33:24.993223 2513458 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:33:25.001323 2513458 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:33:25.014681 2513458 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:33:25.024365 2513458 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:33:25.034068 2513458 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:33:25.048542 2513458 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:33:25.057149 2513458 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:33:25.065042 2513458 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:33:25.073086 2513458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:33:25.212484 2513458 ssh_runner.go:195] Run: sudo systemctl restart crio
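The sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, default sysctls) and then restarts CRI-O. A rough Go equivalent of the two simplest substitutions, assuming a locally readable copy of the drop-in file (path and helper name are illustrative, not minikube's implementation):

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// patchCrioConf rewrites the pause_image and cgroup_manager keys in a
	// CRI-O drop-in config, mirroring the sed commands in the log above.
	func patchCrioConf(path, pauseImage, cgroupManager string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		if err := patchCrioConf("02-crio.conf", "registry.k8s.io/pause:3.10.1", "cgroupfs"); err != nil {
			fmt.Println(err)
		}
	}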
	I1101 09:33:25.359691 2513458 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:33:25.359784 2513458 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:33:25.364436 2513458 start.go:564] Will wait 60s for crictl version
	I1101 09:33:25.364543 2513458 ssh_runner.go:195] Run: which crictl
	I1101 09:33:25.367774 2513458 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:33:25.393490 2513458 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:33:25.393601 2513458 ssh_runner.go:195] Run: crio --version
	I1101 09:33:25.425129 2513458 ssh_runner.go:195] Run: crio --version
	I1101 09:33:25.467989 2513458 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:33:25.470908 2513458 cli_runner.go:164] Run: docker network inspect newest-cni-124713 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:33:25.486652 2513458 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 09:33:25.490602 2513458 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:33:25.502905 2513458 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1101 09:33:25.505883 2513458 kubeadm.go:884] updating cluster {Name:newest-cni-124713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-124713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:33:25.506043 2513458 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:33:25.506128 2513458 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:33:25.541852 2513458 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:33:25.541879 2513458 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:33:25.541935 2513458 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:33:25.567343 2513458 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:33:25.567370 2513458 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:33:25.567379 2513458 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 09:33:25.567466 2513458 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-124713 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-124713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:33:25.567548 2513458 ssh_runner.go:195] Run: crio config
	I1101 09:33:25.621869 2513458 cni.go:84] Creating CNI manager for ""
	I1101 09:33:25.621888 2513458 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:33:25.621908 2513458 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1101 09:33:25.621931 2513458 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-124713 NodeName:newest-cni-124713 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:33:25.622049 2513458 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-124713"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
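The kubeadm config above is rendered from the options shown at kubeadm.go:190 (pod CIDR, service CIDR, advertise address, extra args). As a hedged illustration of that templating step rather than minikube's actual template, a trimmed-down render of just the ClusterConfiguration section might look like:

	package main

	import (
		"os"
		"text/template"
	)

	// A trimmed-down stand-in for the generated ClusterConfiguration; the
	// real file also carries InitConfiguration, KubeletConfiguration and
	// KubeProxyConfiguration sections, as shown above.
	const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	clusterName: mk
	controlPlaneEndpoint: {{.ControlPlaneEndpoint}}
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	func main() {
		params := struct {
			ControlPlaneEndpoint, KubernetesVersion, PodSubnet, ServiceSubnet string
		}{
			ControlPlaneEndpoint: "control-plane.minikube.internal:8443",
			KubernetesVersion:    "v1.34.1",
			PodSubnet:            "10.42.0.0/16",
			ServiceSubnet:        "10.96.0.0/12",
		}
		template.Must(template.New("cfg").Parse(clusterCfg)).Execute(os.Stdout, params)
	}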
	
	I1101 09:33:25.622116 2513458 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:33:25.629793 2513458 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:33:25.629902 2513458 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:33:25.637311 2513458 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 09:33:25.650710 2513458 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:33:25.663457 2513458 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1101 09:33:25.675928 2513458 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:33:25.679459 2513458 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:33:25.688858 2513458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:33:25.824537 2513458 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:33:25.841690 2513458 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713 for IP: 192.168.76.2
	I1101 09:33:25.841713 2513458 certs.go:195] generating shared ca certs ...
	I1101 09:33:25.841729 2513458 certs.go:227] acquiring lock for ca certs: {Name:mk24842b93d4e231663829c7c8677798ff77a3a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:33:25.841913 2513458 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key
	I1101 09:33:25.841974 2513458 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key
	I1101 09:33:25.841988 2513458 certs.go:257] generating profile certs ...
	I1101 09:33:25.842060 2513458 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/client.key
	I1101 09:33:25.842091 2513458 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/client.crt with IP's: []
	I1101 09:33:26.658988 2513458 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/client.crt ...
	I1101 09:33:26.659019 2513458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/client.crt: {Name:mk33f5edbec2b269a432aafa09a4c8d9260bb465 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:33:26.659254 2513458 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/client.key ...
	I1101 09:33:26.659270 2513458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/client.key: {Name:mkaed4d5ae37cc5082cbe8337882db535d184a23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:33:26.659374 2513458 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/apiserver.key.7e7354fe
	I1101 09:33:26.659393 2513458 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/apiserver.crt.7e7354fe with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1101 09:33:27.071428 2513458 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/apiserver.crt.7e7354fe ...
	I1101 09:33:27.071458 2513458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/apiserver.crt.7e7354fe: {Name:mk66b57715857dfa6ffeefcf39164fd6e7dcb3e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:33:27.071622 2513458 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/apiserver.key.7e7354fe ...
	I1101 09:33:27.071636 2513458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/apiserver.key.7e7354fe: {Name:mk4cf04524b1434a5b3a5ff23f58e76b17f41f0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:33:27.071726 2513458 certs.go:382] copying /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/apiserver.crt.7e7354fe -> /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/apiserver.crt
	I1101 09:33:27.071819 2513458 certs.go:386] copying /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/apiserver.key.7e7354fe -> /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/apiserver.key
	I1101 09:33:27.071901 2513458 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/proxy-client.key
	I1101 09:33:27.071922 2513458 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/proxy-client.crt with IP's: []
	I1101 09:33:27.738929 2513458 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/proxy-client.crt ...
	I1101 09:33:27.738961 2513458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/proxy-client.crt: {Name:mkd6267e008a77d4c179e06d303ee39f3fc1063a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:33:27.739158 2513458 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/proxy-client.key ...
	I1101 09:33:27.739174 2513458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/proxy-client.key: {Name:mk0fa5380261e7bee65dfea81095b7046cb9bafb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:33:27.739391 2513458 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem (1338 bytes)
	W1101 09:33:27.739434 2513458 certs.go:480] ignoring /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982_empty.pem, impossibly tiny 0 bytes
	I1101 09:33:27.739444 2513458 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 09:33:27.739468 2513458 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:33:27.739513 2513458 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:33:27.739545 2513458 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem (1675 bytes)
	I1101 09:33:27.739593 2513458 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem (1708 bytes)
	I1101 09:33:27.740199 2513458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:33:27.758153 2513458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 09:33:27.778086 2513458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:33:27.797105 2513458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:33:27.813987 2513458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 09:33:27.831176 2513458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:33:27.849056 2513458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:33:27.870342 2513458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:33:27.896414 2513458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem --> /usr/share/ca-certificates/23159822.pem (1708 bytes)
	I1101 09:33:27.913866 2513458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:33:27.930993 2513458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem --> /usr/share/ca-certificates/2315982.pem (1338 bytes)
	I1101 09:33:27.949361 2513458 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:33:27.962351 2513458 ssh_runner.go:195] Run: openssl version
	I1101 09:33:27.968563 2513458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23159822.pem && ln -fs /usr/share/ca-certificates/23159822.pem /etc/ssl/certs/23159822.pem"
	I1101 09:33:27.976779 2513458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23159822.pem
	I1101 09:33:27.980608 2513458 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 08:36 /usr/share/ca-certificates/23159822.pem
	I1101 09:33:27.980677 2513458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23159822.pem
	I1101 09:33:28.022183 2513458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23159822.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:33:28.031371 2513458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:33:28.041054 2513458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:33:28.045152 2513458 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:33:28.045313 2513458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:33:28.087319 2513458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:33:28.096427 2513458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2315982.pem && ln -fs /usr/share/ca-certificates/2315982.pem /etc/ssl/certs/2315982.pem"
	I1101 09:33:28.105398 2513458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2315982.pem
	I1101 09:33:28.109087 2513458 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 08:36 /usr/share/ca-certificates/2315982.pem
	I1101 09:33:28.109154 2513458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2315982.pem
	I1101 09:33:28.152188 2513458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2315982.pem /etc/ssl/certs/51391683.0"
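Each CA certificate installed above is hashed with openssl x509 -hash and symlinked under /etc/ssl/certs so OpenSSL-based clients can locate it by subject hash. A small Go sketch of those two steps, shelling out to openssl exactly as the logged commands do (the helper name linkCert is invented):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCert computes the OpenSSL subject hash of a PEM certificate and
	// creates the <hash>.0 symlink in dir, mirroring the commands above.
	func linkCert(pemPath, dir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(dir, hash+".0")
		_ = os.Remove(link) // replace an existing link, in the spirit of `ln -fs`
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Println(err)
		}
	}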
	I1101 09:33:28.161104 2513458 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:33:28.164455 2513458 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 09:33:28.164506 2513458 kubeadm.go:401] StartCluster: {Name:newest-cni-124713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-124713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:33:28.164579 2513458 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:33:28.164637 2513458 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:33:28.193017 2513458 cri.go:89] found id: ""
	I1101 09:33:28.193097 2513458 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:33:28.200784 2513458 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:33:28.208218 2513458 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 09:33:28.208298 2513458 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:33:28.216676 2513458 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 09:33:28.216694 2513458 kubeadm.go:158] found existing configuration files:
	
	I1101 09:33:28.216744 2513458 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 09:33:28.224378 2513458 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 09:33:28.224484 2513458 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 09:33:28.231793 2513458 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 09:33:28.239325 2513458 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 09:33:28.239389 2513458 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:33:28.246699 2513458 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 09:33:28.254331 2513458 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 09:33:28.254415 2513458 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:33:28.261646 2513458 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 09:33:28.269647 2513458 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 09:33:28.269708 2513458 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 09:33:28.279263 2513458 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 09:33:28.330332 2513458 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 09:33:28.330687 2513458 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 09:33:28.355370 2513458 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 09:33:28.355452 2513458 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 09:33:28.355498 2513458 kubeadm.go:319] OS: Linux
	I1101 09:33:28.355552 2513458 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 09:33:28.355607 2513458 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 09:33:28.355660 2513458 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 09:33:28.355716 2513458 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 09:33:28.355776 2513458 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 09:33:28.355830 2513458 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 09:33:28.355950 2513458 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 09:33:28.356006 2513458 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 09:33:28.356058 2513458 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 09:33:28.422644 2513458 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 09:33:28.422844 2513458 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 09:33:28.422996 2513458 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 09:33:28.429945 2513458 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 09:33:28.435091 2513458 out.go:252]   - Generating certificates and keys ...
	I1101 09:33:28.435185 2513458 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 09:33:28.435261 2513458 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 09:33:29.651649 2513458 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 09:33:31.117319 2513458 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 09:33:31.517624 2513458 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 09:33:31.867196 2513458 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
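	The cleanup sequence at 09:33:28 checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not contain it before re-running kubeadm init. A minimal shell sketch of that check-and-remove pattern (endpoint and file names taken from the log above; minikube itself issues these as individual ssh_runner calls rather than a loop):
	
	  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    # keep the file only if it already points at the expected control plane
	    if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
	      sudo rm -f "/etc/kubernetes/$f"
	    fi
	  done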
	
	
	==> CRI-O <==
	Nov 01 09:33:22 default-k8s-diff-port-703627 crio[836]: time="2025-11-01T09:33:22.18999588Z" level=info msg="Created container 392421b7d04db3753ca054dcfd4448241673447c8a8cbbda5d0929b992df5ff1: kube-system/coredns-66bc5c9577-7hh2n/coredns" id=e89a0d4d-76fe-4eb5-8486-dbed7b6f9362 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:33:22 default-k8s-diff-port-703627 crio[836]: time="2025-11-01T09:33:22.191151188Z" level=info msg="Starting container: 392421b7d04db3753ca054dcfd4448241673447c8a8cbbda5d0929b992df5ff1" id=7b9d5bef-ad91-43c8-b04f-e0ccecc09576 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:33:22 default-k8s-diff-port-703627 crio[836]: time="2025-11-01T09:33:22.216073491Z" level=info msg="Started container" PID=1759 containerID=392421b7d04db3753ca054dcfd4448241673447c8a8cbbda5d0929b992df5ff1 description=kube-system/coredns-66bc5c9577-7hh2n/coredns id=7b9d5bef-ad91-43c8-b04f-e0ccecc09576 name=/runtime.v1.RuntimeService/StartContainer sandboxID=efb306374ed0cff2bae2a645e2b18970a65206998129d6dc731c342f85d0247b
	Nov 01 09:33:25 default-k8s-diff-port-703627 crio[836]: time="2025-11-01T09:33:25.180551213Z" level=info msg="Running pod sandbox: default/busybox/POD" id=3d6f7a7c-d3f4-462b-b071-5c4a2b5d4682 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:33:25 default-k8s-diff-port-703627 crio[836]: time="2025-11-01T09:33:25.180620085Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:33:25 default-k8s-diff-port-703627 crio[836]: time="2025-11-01T09:33:25.200390023Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:4d36c9726abcab96448c2b9903a95fc7af773aaff8ae41771c6158755e7d4605 UID:016ea7a0-d76a-42a7-82a6-75f154f119e9 NetNS:/var/run/netns/6f6f478b-429e-433e-ac21-75e5c22edb4e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012db98}] Aliases:map[]}"
	Nov 01 09:33:25 default-k8s-diff-port-703627 crio[836]: time="2025-11-01T09:33:25.200549871Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 01 09:33:25 default-k8s-diff-port-703627 crio[836]: time="2025-11-01T09:33:25.231263554Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:4d36c9726abcab96448c2b9903a95fc7af773aaff8ae41771c6158755e7d4605 UID:016ea7a0-d76a-42a7-82a6-75f154f119e9 NetNS:/var/run/netns/6f6f478b-429e-433e-ac21-75e5c22edb4e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012db98}] Aliases:map[]}"
	Nov 01 09:33:25 default-k8s-diff-port-703627 crio[836]: time="2025-11-01T09:33:25.231413892Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 01 09:33:25 default-k8s-diff-port-703627 crio[836]: time="2025-11-01T09:33:25.238291396Z" level=info msg="Ran pod sandbox 4d36c9726abcab96448c2b9903a95fc7af773aaff8ae41771c6158755e7d4605 with infra container: default/busybox/POD" id=3d6f7a7c-d3f4-462b-b071-5c4a2b5d4682 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:33:25 default-k8s-diff-port-703627 crio[836]: time="2025-11-01T09:33:25.240502149Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d62edb3b-b058-460d-ae6d-91e66990ada3 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:33:25 default-k8s-diff-port-703627 crio[836]: time="2025-11-01T09:33:25.240636324Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=d62edb3b-b058-460d-ae6d-91e66990ada3 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:33:25 default-k8s-diff-port-703627 crio[836]: time="2025-11-01T09:33:25.240685168Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=d62edb3b-b058-460d-ae6d-91e66990ada3 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:33:25 default-k8s-diff-port-703627 crio[836]: time="2025-11-01T09:33:25.243190288Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=eccb8c67-7120-4b21-a96d-d6e8748ca2f5 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:33:25 default-k8s-diff-port-703627 crio[836]: time="2025-11-01T09:33:25.247494956Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 01 09:33:27 default-k8s-diff-port-703627 crio[836]: time="2025-11-01T09:33:27.283095006Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=eccb8c67-7120-4b21-a96d-d6e8748ca2f5 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:33:27 default-k8s-diff-port-703627 crio[836]: time="2025-11-01T09:33:27.283823708Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f3059de7-920a-40b2-9703-5596c6b2b4b5 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:33:27 default-k8s-diff-port-703627 crio[836]: time="2025-11-01T09:33:27.287962523Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=df076239-673b-4b8b-b2d1-fee060bdc5ec name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:33:27 default-k8s-diff-port-703627 crio[836]: time="2025-11-01T09:33:27.295292116Z" level=info msg="Creating container: default/busybox/busybox" id=40146ff8-e3be-4503-a05f-4f6c19ff1d09 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:33:27 default-k8s-diff-port-703627 crio[836]: time="2025-11-01T09:33:27.295425036Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:33:27 default-k8s-diff-port-703627 crio[836]: time="2025-11-01T09:33:27.305766583Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:33:27 default-k8s-diff-port-703627 crio[836]: time="2025-11-01T09:33:27.306243058Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:33:27 default-k8s-diff-port-703627 crio[836]: time="2025-11-01T09:33:27.329834592Z" level=info msg="Created container fa8eafb1ce66a0d3c5890cf93a11bb452edd2e3d214f2fc05fe539e7fd5964ce: default/busybox/busybox" id=40146ff8-e3be-4503-a05f-4f6c19ff1d09 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:33:27 default-k8s-diff-port-703627 crio[836]: time="2025-11-01T09:33:27.333480325Z" level=info msg="Starting container: fa8eafb1ce66a0d3c5890cf93a11bb452edd2e3d214f2fc05fe539e7fd5964ce" id=9c880a5c-a55e-43c4-ad38-aaa941026f0f name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:33:27 default-k8s-diff-port-703627 crio[836]: time="2025-11-01T09:33:27.337759673Z" level=info msg="Started container" PID=1822 containerID=fa8eafb1ce66a0d3c5890cf93a11bb452edd2e3d214f2fc05fe539e7fd5964ce description=default/busybox/busybox id=9c880a5c-a55e-43c4-ad38-aaa941026f0f name=/runtime.v1.RuntimeService/StartContainer sandboxID=4d36c9726abcab96448c2b9903a95fc7af773aaff8ae41771c6158755e7d4605
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	fa8eafb1ce66a       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   4d36c9726abca       busybox                                                default
	392421b7d04db       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago       Running             coredns                   0                   efb306374ed0c       coredns-66bc5c9577-7hh2n                               kube-system
	1e73948f6e4d8       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago       Running             coredns                   0                   a7b0a980fd1bb       coredns-66bc5c9577-mbmf5                               kube-system
	73f69edf97a9c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago       Running             storage-provisioner       0                   f87bfaea2850c       storage-provisioner                                    kube-system
	29bdfccc2283f       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      53 seconds ago       Running             kube-proxy                0                   7ab9256bdad9b       kube-proxy-6lwj9                                       kube-system
	1b706b33666a3       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      53 seconds ago       Running             kindnet-cni               0                   d4c9393879af8       kindnet-td2vz                                          kube-system
	b59d1abf287ef       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   8472c2184b796       kube-apiserver-default-k8s-diff-port-703627            kube-system
	8aaff57d118fd       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   2f39cdf57ceaf       kube-scheduler-default-k8s-diff-port-703627            kube-system
	451841f517e87       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   ae7454bcb8e65       etcd-default-k8s-diff-port-703627                      kube-system
	bb9a03d29788b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   b23ef67714d43       kube-controller-manager-default-k8s-diff-port-703627   kube-system
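	The table above is CRI-O's view of the node's containers. A roughly equivalent listing can be produced on the node with crictl (assuming crictl is pointed at CRI-O's socket, as it is inside the minikube node container):
	
	  # all containers, including exited ones
	  sudo crictl ps -a
	  # only container IDs for kube-system pods, matching the ssh_runner call logged at 09:33:28
	  sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system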
	
	
	==> coredns [1e73948f6e4d85c02bc6d9cb716fe106f3e1a3b7dae66141671575c06f047d07] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59523 - 29682 "HINFO IN 129294946929900623.4222368255339783849. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.005260285s
	
	
	==> coredns [392421b7d04db3753ca054dcfd4448241673447c8a8cbbda5d0929b992df5ff1] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36450 - 32113 "HINFO IN 8306929027649132184.329086083654953507. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.017408338s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-703627
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-703627
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=default-k8s-diff-port-703627
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_32_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:32:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-703627
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:33:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:33:26 +0000   Sat, 01 Nov 2025 09:32:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:33:26 +0000   Sat, 01 Nov 2025 09:32:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:33:26 +0000   Sat, 01 Nov 2025 09:32:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:33:26 +0000   Sat, 01 Nov 2025 09:33:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-703627
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                715daf08-52c6-47e9-9d22-22f4a756b35f
	  Boot ID:                    eebecd53-57fd-46e5-aa39-103fca906436
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-7hh2n                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     54s
	  kube-system                 coredns-66bc5c9577-mbmf5                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     54s
	  kube-system                 etcd-default-k8s-diff-port-703627                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         59s
	  kube-system                 kindnet-td2vz                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      54s
	  kube-system                 kube-apiserver-default-k8s-diff-port-703627             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-703627    200m (10%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-proxy-6lwj9                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-scheduler-default-k8s-diff-port-703627             100m (5%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 53s                kube-proxy       
	  Normal   NodeHasSufficientMemory  68s (x8 over 68s)  kubelet          Node default-k8s-diff-port-703627 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    68s (x8 over 68s)  kubelet          Node default-k8s-diff-port-703627 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     68s (x8 over 68s)  kubelet          Node default-k8s-diff-port-703627 status is now: NodeHasSufficientPID
	  Normal   Starting                 59s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 59s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s                kubelet          Node default-k8s-diff-port-703627 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s                kubelet          Node default-k8s-diff-port-703627 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s                kubelet          Node default-k8s-diff-port-703627 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                node-controller  Node default-k8s-diff-port-703627 event: Registered Node default-k8s-diff-port-703627 in Controller
	  Normal   NodeReady                13s                kubelet          Node default-k8s-diff-port-703627 status is now: NodeReady
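	The node description above comes straight from the Kubernetes API and can be regenerated against this profile's kubeconfig (a sketch; the node and profile name are taken from the labels above):
	
	  kubectl describe node default-k8s-diff-port-703627
	  # or via minikube's bundled kubectl for this profile
	  out/minikube-linux-arm64 -p default-k8s-diff-port-703627 kubectl -- describe node default-k8s-diff-port-703627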
	
	
	==> dmesg <==
	[Nov 1 09:12] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:13] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:14] overlayfs: idmapped layers are currently not supported
	[  +7.992192] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:15] overlayfs: idmapped layers are currently not supported
	[ +24.457663] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:16] overlayfs: idmapped layers are currently not supported
	[ +26.408819] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:18] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:22] overlayfs: idmapped layers are currently not supported
	[ +31.970573] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:24] overlayfs: idmapped layers are currently not supported
	[ +34.721891] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:25] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:26] overlayfs: idmapped layers are currently not supported
	[  +0.217637] overlayfs: idmapped layers are currently not supported
	[ +42.063471] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:29] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:30] overlayfs: idmapped layers are currently not supported
	[ +22.794250] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:31] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:32] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [451841f517e87528d17d5d59e201adfce65d4b73b129aee2250588ef9b32e46b] <==
	{"level":"warn","ts":"2025-11-01T09:32:31.348419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:31.365493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:31.381516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:31.399085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:31.413941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:31.430047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:31.458829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:31.465577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:31.480192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:31.494865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:31.522114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:31.536673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:31.551333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:31.588303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:31.610790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:31.633352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:31.636805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:31.659623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:31.674398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:31.689269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:31.704136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:31.734091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:31.748297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:31.762946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:32:31.830256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58700","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:33:35 up 18:16,  0 user,  load average: 3.48, 3.48, 3.06
	Linux default-k8s-diff-port-703627 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
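	The kernel section is a one-line host summary; it corresponds roughly to running the following inside the node (a sketch, not the exact collection code):
	
	  uptime
	  uname -a
	  grep PRETTY_NAME /etc/os-release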
	
	
	==> kindnet [1b706b33666a313ec88a2bfe1458705c7fe57ee20d2bd01a5c2ca20223712240] <==
	I1101 09:32:41.250372       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:32:41.250707       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 09:32:41.250857       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:32:41.250896       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:32:41.250927       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:32:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:32:41.449220       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:32:41.449285       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:32:41.449318       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:32:41.450206       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 09:33:11.449958       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 09:33:11.450156       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 09:33:11.450235       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 09:33:11.450321       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1101 09:33:12.949922       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:33:12.949952       1 metrics.go:72] Registering metrics
	I1101 09:33:12.950010       1 controller.go:711] "Syncing nftables rules"
	I1101 09:33:21.452226       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 09:33:21.452330       1 main.go:301] handling current node
	I1101 09:33:31.451934       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 09:33:31.452043       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b59d1abf287efa9ad429746f2d716bb90496cb07975d207c74b3d2d9db03b3b3] <==
	I1101 09:32:32.678632       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:32:32.678637       1 cache.go:39] Caches are synced for autoregister controller
	E1101 09:32:32.683577       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1101 09:32:32.691613       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 09:32:32.693032       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:32:32.715226       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:32:32.715332       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:32:32.887815       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:32:33.363034       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 09:32:33.370734       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 09:32:33.370757       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:32:34.138550       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:32:34.190456       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:32:34.316088       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 09:32:34.323519       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1101 09:32:34.324655       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:32:34.329638       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:32:34.526506       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:32:35.193493       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:32:35.214541       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 09:32:35.235329       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 09:32:40.379203       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:32:40.435759       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:32:40.446622       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:32:40.636212       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [bb9a03d29788befc575ee0ca99573800197b9f7bbcd9fd783a9dd4522226251d] <==
	I1101 09:32:39.588075       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 09:32:39.589275       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 09:32:39.589293       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 09:32:39.590402       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 09:32:39.599712       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 09:32:39.599808       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:32:39.607138       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:32:39.608215       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 09:32:39.616430       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 09:32:39.620213       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 09:32:39.620589       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:32:39.620609       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:32:39.620615       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:32:39.622797       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 09:32:39.622860       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 09:32:39.622837       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 09:32:39.624122       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 09:32:39.624635       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 09:32:39.629823       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 09:32:39.629879       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 09:32:39.629904       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 09:32:39.629918       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 09:32:39.629929       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 09:32:39.638439       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-703627" podCIDRs=["10.244.0.0/24"]
	I1101 09:33:24.578600       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [29bdfccc2283fc2da8a069c925a77ff714ec0271f09a6e565bea94c6b51e3445] <==
	I1101 09:32:41.181104       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:32:41.275414       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:32:41.375994       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:32:41.376031       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 09:32:41.376147       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:32:41.394762       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:32:41.394817       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:32:41.398857       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:32:41.399175       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:32:41.399197       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:32:41.400344       1 config.go:200] "Starting service config controller"
	I1101 09:32:41.400415       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:32:41.404059       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:32:41.404080       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:32:41.404099       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:32:41.404104       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:32:41.404739       1 config.go:309] "Starting node config controller"
	I1101 09:32:41.404754       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:32:41.404761       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:32:41.501161       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:32:41.504478       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:32:41.504492       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8aaff57d118fd07253cd8d4d8b3e2487ea4d591add43e45f07cd8e3c03ee8fc0] <==
	E1101 09:32:32.617333       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:32:32.628310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:32:32.631894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1101 09:32:32.631970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:32:32.632034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:32:32.632068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:32:32.632102       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:32:32.632138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:32:32.632173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:32:32.632209       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:32:33.486653       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:32:33.502566       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:32:33.525121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:32:33.542341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:32:33.542445       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:32:33.562169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:32:33.620351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:32:33.623380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:32:33.708456       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:32:33.747899       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:32:33.776814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:32:33.844988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:32:33.846558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:32:33.914864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1101 09:32:36.897652       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:32:40 default-k8s-diff-port-703627 kubelet[1309]: I1101 09:32:40.732298    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0d693ff-55a9-4906-891d-28f7d9849789-xtables-lock\") pod \"kindnet-td2vz\" (UID: \"b0d693ff-55a9-4906-891d-28f7d9849789\") " pod="kube-system/kindnet-td2vz"
	Nov 01 09:32:40 default-k8s-diff-port-703627 kubelet[1309]: I1101 09:32:40.732417    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvmzl\" (UniqueName: \"kubernetes.io/projected/b0d693ff-55a9-4906-891d-28f7d9849789-kube-api-access-qvmzl\") pod \"kindnet-td2vz\" (UID: \"b0d693ff-55a9-4906-891d-28f7d9849789\") " pod="kube-system/kindnet-td2vz"
	Nov 01 09:32:40 default-k8s-diff-port-703627 kubelet[1309]: I1101 09:32:40.732509    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f48fe986-0db5-425e-a988-0396b9bd45a8-kube-proxy\") pod \"kube-proxy-6lwj9\" (UID: \"f48fe986-0db5-425e-a988-0396b9bd45a8\") " pod="kube-system/kube-proxy-6lwj9"
	Nov 01 09:32:40 default-k8s-diff-port-703627 kubelet[1309]: I1101 09:32:40.732595    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f48fe986-0db5-425e-a988-0396b9bd45a8-xtables-lock\") pod \"kube-proxy-6lwj9\" (UID: \"f48fe986-0db5-425e-a988-0396b9bd45a8\") " pod="kube-system/kube-proxy-6lwj9"
	Nov 01 09:32:40 default-k8s-diff-port-703627 kubelet[1309]: I1101 09:32:40.732690    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f48fe986-0db5-425e-a988-0396b9bd45a8-lib-modules\") pod \"kube-proxy-6lwj9\" (UID: \"f48fe986-0db5-425e-a988-0396b9bd45a8\") " pod="kube-system/kube-proxy-6lwj9"
	Nov 01 09:32:40 default-k8s-diff-port-703627 kubelet[1309]: I1101 09:32:40.732795    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b0d693ff-55a9-4906-891d-28f7d9849789-cni-cfg\") pod \"kindnet-td2vz\" (UID: \"b0d693ff-55a9-4906-891d-28f7d9849789\") " pod="kube-system/kindnet-td2vz"
	Nov 01 09:32:40 default-k8s-diff-port-703627 kubelet[1309]: I1101 09:32:40.732883    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0d693ff-55a9-4906-891d-28f7d9849789-lib-modules\") pod \"kindnet-td2vz\" (UID: \"b0d693ff-55a9-4906-891d-28f7d9849789\") " pod="kube-system/kindnet-td2vz"
	Nov 01 09:32:40 default-k8s-diff-port-703627 kubelet[1309]: I1101 09:32:40.922135    1309 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 01 09:32:41 default-k8s-diff-port-703627 kubelet[1309]: I1101 09:32:41.310432    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6lwj9" podStartSLOduration=1.310412509 podStartE2EDuration="1.310412509s" podCreationTimestamp="2025-11-01 09:32:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:32:41.285824514 +0000 UTC m=+6.262495164" watchObservedRunningTime="2025-11-01 09:32:41.310412509 +0000 UTC m=+6.287083159"
	Nov 01 09:32:46 default-k8s-diff-port-703627 kubelet[1309]: I1101 09:32:46.090752    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-td2vz" podStartSLOduration=6.090734546 podStartE2EDuration="6.090734546s" podCreationTimestamp="2025-11-01 09:32:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:32:41.310708435 +0000 UTC m=+6.287379101" watchObservedRunningTime="2025-11-01 09:32:46.090734546 +0000 UTC m=+11.067405195"
	Nov 01 09:33:21 default-k8s-diff-port-703627 kubelet[1309]: I1101 09:33:21.546806    1309 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 01 09:33:21 default-k8s-diff-port-703627 kubelet[1309]: I1101 09:33:21.740673    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w4tq\" (UniqueName: \"kubernetes.io/projected/102037a1-7d8b-49cc-9a86-be75b4bfdcfe-kube-api-access-8w4tq\") pod \"storage-provisioner\" (UID: \"102037a1-7d8b-49cc-9a86-be75b4bfdcfe\") " pod="kube-system/storage-provisioner"
	Nov 01 09:33:21 default-k8s-diff-port-703627 kubelet[1309]: I1101 09:33:21.740736    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27a206c0-1b3c-477f-a1c8-63a1f5c04dac-config-volume\") pod \"coredns-66bc5c9577-7hh2n\" (UID: \"27a206c0-1b3c-477f-a1c8-63a1f5c04dac\") " pod="kube-system/coredns-66bc5c9577-7hh2n"
	Nov 01 09:33:21 default-k8s-diff-port-703627 kubelet[1309]: I1101 09:33:21.740759    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d919bbe5-a51f-497a-ae3b-e76e42dfb5c4-config-volume\") pod \"coredns-66bc5c9577-mbmf5\" (UID: \"d919bbe5-a51f-497a-ae3b-e76e42dfb5c4\") " pod="kube-system/coredns-66bc5c9577-mbmf5"
	Nov 01 09:33:21 default-k8s-diff-port-703627 kubelet[1309]: I1101 09:33:21.740778    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/102037a1-7d8b-49cc-9a86-be75b4bfdcfe-tmp\") pod \"storage-provisioner\" (UID: \"102037a1-7d8b-49cc-9a86-be75b4bfdcfe\") " pod="kube-system/storage-provisioner"
	Nov 01 09:33:21 default-k8s-diff-port-703627 kubelet[1309]: I1101 09:33:21.740798    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7gvm\" (UniqueName: \"kubernetes.io/projected/27a206c0-1b3c-477f-a1c8-63a1f5c04dac-kube-api-access-w7gvm\") pod \"coredns-66bc5c9577-7hh2n\" (UID: \"27a206c0-1b3c-477f-a1c8-63a1f5c04dac\") " pod="kube-system/coredns-66bc5c9577-7hh2n"
	Nov 01 09:33:21 default-k8s-diff-port-703627 kubelet[1309]: I1101 09:33:21.740818    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzkm7\" (UniqueName: \"kubernetes.io/projected/d919bbe5-a51f-497a-ae3b-e76e42dfb5c4-kube-api-access-vzkm7\") pod \"coredns-66bc5c9577-mbmf5\" (UID: \"d919bbe5-a51f-497a-ae3b-e76e42dfb5c4\") " pod="kube-system/coredns-66bc5c9577-mbmf5"
	Nov 01 09:33:21 default-k8s-diff-port-703627 kubelet[1309]: W1101 09:33:21.931254    1309 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a747d7437780c8943ddef42d5ec2400858d0693e94483b75825664710eb98d9e/crio-f87bfaea2850c89475b5516918a65f597c9272f0d9109ff3284682f430c00961 WatchSource:0}: Error finding container f87bfaea2850c89475b5516918a65f597c9272f0d9109ff3284682f430c00961: Status 404 returned error can't find the container with id f87bfaea2850c89475b5516918a65f597c9272f0d9109ff3284682f430c00961
	Nov 01 09:33:22 default-k8s-diff-port-703627 kubelet[1309]: W1101 09:33:22.086398    1309 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a747d7437780c8943ddef42d5ec2400858d0693e94483b75825664710eb98d9e/crio-efb306374ed0cff2bae2a645e2b18970a65206998129d6dc731c342f85d0247b WatchSource:0}: Error finding container efb306374ed0cff2bae2a645e2b18970a65206998129d6dc731c342f85d0247b: Status 404 returned error can't find the container with id efb306374ed0cff2bae2a645e2b18970a65206998129d6dc731c342f85d0247b
	Nov 01 09:33:22 default-k8s-diff-port-703627 kubelet[1309]: I1101 09:33:22.403838    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-7hh2n" podStartSLOduration=42.40381875 podStartE2EDuration="42.40381875s" podCreationTimestamp="2025-11-01 09:32:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:33:22.38266138 +0000 UTC m=+47.359332046" watchObservedRunningTime="2025-11-01 09:33:22.40381875 +0000 UTC m=+47.380489400"
	Nov 01 09:33:22 default-k8s-diff-port-703627 kubelet[1309]: I1101 09:33:22.494470    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-mbmf5" podStartSLOduration=42.49444168 podStartE2EDuration="42.49444168s" podCreationTimestamp="2025-11-01 09:32:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:33:22.442726397 +0000 UTC m=+47.419397063" watchObservedRunningTime="2025-11-01 09:33:22.49444168 +0000 UTC m=+47.471112330"
	Nov 01 09:33:24 default-k8s-diff-port-703627 kubelet[1309]: I1101 09:33:24.865271    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=43.865252865 podStartE2EDuration="43.865252865s" podCreationTimestamp="2025-11-01 09:32:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:33:22.49593183 +0000 UTC m=+47.472602521" watchObservedRunningTime="2025-11-01 09:33:24.865252865 +0000 UTC m=+49.841923514"
	Nov 01 09:33:24 default-k8s-diff-port-703627 kubelet[1309]: I1101 09:33:24.972843    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hphfw\" (UniqueName: \"kubernetes.io/projected/016ea7a0-d76a-42a7-82a6-75f154f119e9-kube-api-access-hphfw\") pod \"busybox\" (UID: \"016ea7a0-d76a-42a7-82a6-75f154f119e9\") " pod="default/busybox"
	Nov 01 09:33:25 default-k8s-diff-port-703627 kubelet[1309]: W1101 09:33:25.234974    1309 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a747d7437780c8943ddef42d5ec2400858d0693e94483b75825664710eb98d9e/crio-4d36c9726abcab96448c2b9903a95fc7af773aaff8ae41771c6158755e7d4605 WatchSource:0}: Error finding container 4d36c9726abcab96448c2b9903a95fc7af773aaff8ae41771c6158755e7d4605: Status 404 returned error can't find the container with id 4d36c9726abcab96448c2b9903a95fc7af773aaff8ae41771c6158755e7d4605
	Nov 01 09:33:27 default-k8s-diff-port-703627 kubelet[1309]: I1101 09:33:27.397520    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.353739722 podStartE2EDuration="3.397504428s" podCreationTimestamp="2025-11-01 09:33:24 +0000 UTC" firstStartedPulling="2025-11-01 09:33:25.240989988 +0000 UTC m=+50.217660646" lastFinishedPulling="2025-11-01 09:33:27.284754702 +0000 UTC m=+52.261425352" observedRunningTime="2025-11-01 09:33:27.397357995 +0000 UTC m=+52.374028653" watchObservedRunningTime="2025-11-01 09:33:27.397504428 +0000 UTC m=+52.374175086"
	
	
	==> storage-provisioner [73f69edf97a9c86e115fae019dccd03fecddee3c98d0decfd2d572e7264411c1] <==
	I1101 09:33:22.045438       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 09:33:22.087733       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 09:33:22.087784       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 09:33:22.090872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:33:22.098922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:33:22.099071       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 09:33:22.100849       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-703627_02dfa7a5-7739-4b12-bbae-b1efeac46c7d!
	I1101 09:33:22.101898       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c2b0c60a-1c26-4e31-8638-769a7831ea66", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-703627_02dfa7a5-7739-4b12-bbae-b1efeac46c7d became leader
	W1101 09:33:22.107971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:33:22.125795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:33:22.203946       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-703627_02dfa7a5-7739-4b12-bbae-b1efeac46c7d!
	W1101 09:33:24.128621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:33:24.134041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:33:26.138177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:33:26.147118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:33:28.150798       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:33:28.156686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:33:30.160528       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:33:30.166841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:33:32.170522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:33:32.177808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:33:34.180971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:33:34.190699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-703627 -n default-k8s-diff-port-703627
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-703627 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.24s)
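Note on the repeated warnings in the storage-provisioner log above: the `v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice` lines recur roughly every two seconds, which is consistent with the provisioner's leader-election lock — the event at 09:33:22 shows the lock object is a v1 Endpoints (kube-system/k8s.io-minikube-hostpath), so every renewal round-trips through the deprecated API and the server attaches a warning. Purely for comparison, the sketch below shows the same election expressed with the non-deprecated coordination.k8s.io/v1 Lease lock in client-go. The namespace and lock name are copied from the log; the identity, timings and callbacks are illustrative assumptions, and this is not the code the bundled provisioner actually runs.

// Hedged sketch: Lease-based leader election with client-go, shown only as the
// non-deprecated counterpart to the Endpoints lock whose warnings appear above.
// Identity and timings are illustrative assumptions, not taken from minikube.
package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"k8s.io/klog/v2"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		klog.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// coordination.k8s.io/v1 Lease lock instead of the deprecated v1 Endpoints lock.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "example-provisioner"},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second, // roughly the cadence of the warnings seen above
		ReleaseOnCancel: true,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				klog.Info("became leader; starting provisioner controller")
			},
			OnStoppedLeading: func() {
				klog.Info("lost leadership")
			},
		},
	})
}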

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.81s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-124713 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-124713 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (406.765292ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:33:52Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-124713 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
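The exit status 11 above is minikube's MK_ADDON_ENABLE_PAUSED guard: before enabling an addon it checks whether the cluster is paused, and on this crio node the check itself fails because the probe quoted in the error, `sudo runc list -f json`, exits 1 with `open /run/runc: no such file or directory` — the runc state directory is missing, not any container paused. The Go sketch below only mirrors the shape of that probe for illustration; the struct, the error handling and the treat-missing-state-dir-as-unpaused branch are assumptions, not minikube's actual cruntime code.

// Hedged sketch of an "is anything paused?" probe in the shape suggested by the
// error above. It runs the exact command quoted in MK_ADDON_ENABLE_PAUSED;
// everything else is an illustrative assumption.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// runcContainer holds the fields of interest from `runc list -f json`.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		// This is the branch hit in the log: /run/runc is absent, runc exits 1,
		// and the caller surfaces it as "check paused: list paused: runc: ...".
		if strings.Contains(string(out), "no such file or directory") {
			return nil, nil // hypothetical handling: no state dir means nothing can be paused
		}
		return nil, fmt.Errorf("runc list: %w: %s", err, out)
	}
	var cs []runcContainer
	if len(out) > 0 && strings.TrimSpace(string(out)) != "null" {
		if err := json.Unmarshal(out, &cs); err != nil {
			return nil, fmt.Errorf("parse runc list output: %w", err)
		}
	}
	var paused []string
	for _, c := range cs {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	paused, err := listPaused()
	if err != nil {
		fmt.Println("check paused failed:", err)
		return
	}
	fmt.Printf("%d paused container(s): %v\n", len(paused), paused)
}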
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-124713
helpers_test.go:243: (dbg) docker inspect newest-cni-124713:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d1b820a5201faa8d9964727187addaaa218935f7dd7e8a43484ca4d1526e7728",
	        "Created": "2025-11-01T09:33:17.58430684Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2513855,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:33:17.648814163Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/d1b820a5201faa8d9964727187addaaa218935f7dd7e8a43484ca4d1526e7728/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d1b820a5201faa8d9964727187addaaa218935f7dd7e8a43484ca4d1526e7728/hostname",
	        "HostsPath": "/var/lib/docker/containers/d1b820a5201faa8d9964727187addaaa218935f7dd7e8a43484ca4d1526e7728/hosts",
	        "LogPath": "/var/lib/docker/containers/d1b820a5201faa8d9964727187addaaa218935f7dd7e8a43484ca4d1526e7728/d1b820a5201faa8d9964727187addaaa218935f7dd7e8a43484ca4d1526e7728-json.log",
	        "Name": "/newest-cni-124713",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-124713:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-124713",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d1b820a5201faa8d9964727187addaaa218935f7dd7e8a43484ca4d1526e7728",
	                "LowerDir": "/var/lib/docker/overlay2/2b21a768a0c8792bc24ff211492b34b7cdaf559a9b39b08bd8baef77073b5397-init/diff:/var/lib/docker/overlay2/e248e2c4c8c52e2b41c7098e27a1e6d3433c7b0d01c47093073da500268c4b77/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2b21a768a0c8792bc24ff211492b34b7cdaf559a9b39b08bd8baef77073b5397/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2b21a768a0c8792bc24ff211492b34b7cdaf559a9b39b08bd8baef77073b5397/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2b21a768a0c8792bc24ff211492b34b7cdaf559a9b39b08bd8baef77073b5397/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-124713",
	                "Source": "/var/lib/docker/volumes/newest-cni-124713/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-124713",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-124713",
	                "name.minikube.sigs.k8s.io": "newest-cni-124713",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4c0947736ceefea29b2074fd46e025f4e43fef0f50b07132bd545e9b0d900d2b",
	            "SandboxKey": "/var/run/docker/netns/4c0947736cee",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36370"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36371"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36374"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36372"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36373"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-124713": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:b6:67:5c:a1:c5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cbec865c42edd0a01496f38885614d185989b6c702231d2c3f85ce55dc4aabc5",
	                    "EndpointID": "0cd4c039a763dff9a8a9a214432d93658d696d855749fdb25f76884d64068f19",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-124713",
	                        "d1b820a5201f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-124713 -n newest-cni-124713
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-124713 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-124713 logs -n 25: (1.29063945s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-312549 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:31 UTC │
	│ addons  │ enable metrics-server -p no-preload-357229 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │                     │
	│ stop    │ -p no-preload-357229 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
	│ addons  │ enable dashboard -p no-preload-357229 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
	│ start   │ -p no-preload-357229 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:31 UTC │
	│ addons  │ enable metrics-server -p embed-certs-312549 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ stop    │ -p embed-certs-312549 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ image   │ no-preload-357229 image list --format=json                                                                                                                                                                                                    │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ addons  │ enable dashboard -p embed-certs-312549 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ start   │ -p embed-certs-312549 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:32 UTC │
	│ pause   │ -p no-preload-357229 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ delete  │ -p no-preload-357229                                                                                                                                                                                                                          │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ delete  │ -p no-preload-357229                                                                                                                                                                                                                          │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ delete  │ -p disable-driver-mounts-054033                                                                                                                                                                                                               │ disable-driver-mounts-054033 │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ start   │ -p default-k8s-diff-port-703627 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-703627 │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:33 UTC │
	│ image   │ embed-certs-312549 image list --format=json                                                                                                                                                                                                   │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ pause   │ -p embed-certs-312549 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │                     │
	│ delete  │ -p embed-certs-312549                                                                                                                                                                                                                         │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ delete  │ -p embed-certs-312549                                                                                                                                                                                                                         │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ start   │ -p newest-cni-124713 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-124713            │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-703627 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-703627 │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-703627 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-703627 │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-703627 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-703627 │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ start   │ -p default-k8s-diff-port-703627 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-703627 │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-124713 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-124713            │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:33:48
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:33:48.727737 2516487 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:33:48.727969 2516487 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:33:48.727999 2516487 out.go:374] Setting ErrFile to fd 2...
	I1101 09:33:48.728018 2516487 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:33:48.728287 2516487 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 09:33:48.728778 2516487 out.go:368] Setting JSON to false
	I1101 09:33:48.729826 2516487 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":65775,"bootTime":1761923854,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 09:33:48.729917 2516487 start.go:143] virtualization:  
	I1101 09:33:48.735323 2516487 out.go:179] * [default-k8s-diff-port-703627] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 09:33:48.739355 2516487 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:33:48.739427 2516487 notify.go:221] Checking for updates...
	I1101 09:33:48.745238 2516487 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:33:48.748105 2516487 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:33:48.750978 2516487 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2314135/.minikube
	I1101 09:33:48.753886 2516487 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 09:33:48.756745 2516487 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:33:48.760247 2516487 config.go:182] Loaded profile config "default-k8s-diff-port-703627": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:33:48.760872 2516487 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:33:48.797637 2516487 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:33:48.797763 2516487 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:33:48.866941 2516487 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 09:33:48.856973636 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:33:48.867051 2516487 docker.go:319] overlay module found
	I1101 09:33:48.872089 2516487 out.go:179] * Using the docker driver based on existing profile
	I1101 09:33:48.875054 2516487 start.go:309] selected driver: docker
	I1101 09:33:48.875078 2516487 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-703627 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-703627 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:33:48.875194 2516487 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:33:48.876096 2516487 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:33:48.936475 2516487 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 09:33:48.926209188 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:33:48.936818 2516487 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:33:48.936849 2516487 cni.go:84] Creating CNI manager for ""
	I1101 09:33:48.936902 2516487 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:33:48.936946 2516487 start.go:353] cluster config:
	{Name:default-k8s-diff-port-703627 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-703627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:33:48.941831 2516487 out.go:179] * Starting "default-k8s-diff-port-703627" primary control-plane node in "default-k8s-diff-port-703627" cluster
	I1101 09:33:48.944619 2516487 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:33:48.947496 2516487 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:33:48.950288 2516487 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:33:48.950346 2516487 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 09:33:48.950358 2516487 cache.go:59] Caching tarball of preloaded images
	I1101 09:33:48.950387 2516487 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:33:48.950454 2516487 preload.go:233] Found /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 09:33:48.950464 2516487 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:33:48.950584 2516487 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/config.json ...
	I1101 09:33:48.970053 2516487 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:33:48.970072 2516487 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:33:48.970097 2516487 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:33:48.970128 2516487 start.go:360] acquireMachinesLock for default-k8s-diff-port-703627: {Name:mk723fbf5d77afd626dac1d43272d3636891d6fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:33:48.970196 2516487 start.go:364] duration metric: took 51.436µs to acquireMachinesLock for "default-k8s-diff-port-703627"
	I1101 09:33:48.970216 2516487 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:33:48.970222 2516487 fix.go:54] fixHost starting: 
	I1101 09:33:48.970485 2516487 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-703627 --format={{.State.Status}}
	I1101 09:33:48.989885 2516487 fix.go:112] recreateIfNeeded on default-k8s-diff-port-703627: state=Stopped err=<nil>
	W1101 09:33:48.989912 2516487 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:33:47.321163 2513458 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:33:47.321319 2513458 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-124713 minikube.k8s.io/updated_at=2025_11_01T09_33_47_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192 minikube.k8s.io/name=newest-cni-124713 minikube.k8s.io/primary=true
	I1101 09:33:47.321321 2513458 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:33:47.487064 2513458 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:33:47.487164 2513458 ops.go:34] apiserver oom_adj: -16
	I1101 09:33:47.988065 2513458 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:33:48.487389 2513458 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:33:48.988180 2513458 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:33:49.487618 2513458 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:33:49.987723 2513458 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:33:50.487807 2513458 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:33:50.616877 2513458 kubeadm.go:1114] duration metric: took 3.295618742s to wait for elevateKubeSystemPrivileges
	I1101 09:33:50.616902 2513458 kubeadm.go:403] duration metric: took 22.452398465s to StartCluster
	I1101 09:33:50.616919 2513458 settings.go:142] acquiring lock: {Name:mka73a3765cb6575d4abe38a6ae3325222684786 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:33:50.616978 2513458 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:33:50.617683 2513458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/kubeconfig: {Name:mk53329368b7306829f4e47471838b13e1e36d2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:33:50.617885 2513458 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:33:50.618050 2513458 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 09:33:50.618316 2513458 config.go:182] Loaded profile config "newest-cni-124713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:33:50.618354 2513458 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:33:50.618416 2513458 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-124713"
	I1101 09:33:50.618430 2513458 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-124713"
	I1101 09:33:50.618450 2513458 host.go:66] Checking if "newest-cni-124713" exists ...
	I1101 09:33:50.618946 2513458 cli_runner.go:164] Run: docker container inspect newest-cni-124713 --format={{.State.Status}}
	I1101 09:33:50.619443 2513458 addons.go:70] Setting default-storageclass=true in profile "newest-cni-124713"
	I1101 09:33:50.619466 2513458 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-124713"
	I1101 09:33:50.619729 2513458 cli_runner.go:164] Run: docker container inspect newest-cni-124713 --format={{.State.Status}}
	I1101 09:33:50.623930 2513458 out.go:179] * Verifying Kubernetes components...
	I1101 09:33:50.626833 2513458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:33:50.655997 2513458 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:33:50.659036 2513458 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:33:50.659057 2513458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:33:50.659123 2513458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-124713
	I1101 09:33:50.662587 2513458 addons.go:239] Setting addon default-storageclass=true in "newest-cni-124713"
	I1101 09:33:50.662625 2513458 host.go:66] Checking if "newest-cni-124713" exists ...
	I1101 09:33:50.663068 2513458 cli_runner.go:164] Run: docker container inspect newest-cni-124713 --format={{.State.Status}}
	I1101 09:33:50.705293 2513458 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:33:50.705313 2513458 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:33:50.705374 2513458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-124713
	I1101 09:33:50.711946 2513458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36370 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/newest-cni-124713/id_rsa Username:docker}
	I1101 09:33:50.755927 2513458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36370 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/newest-cni-124713/id_rsa Username:docker}
	I1101 09:33:50.977302 2513458 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 09:33:50.999034 2513458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:33:51.050509 2513458 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:33:51.101348 2513458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:33:51.561292 2513458 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1101 09:33:51.563033 2513458 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:33:51.563094 2513458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:33:51.975995 2513458 api_server.go:72] duration metric: took 1.358080624s to wait for apiserver process to appear ...
	I1101 09:33:51.976125 2513458 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:33:51.976142 2513458 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 09:33:51.981412 2513458 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1101 09:33:51.984208 2513458 addons.go:515] duration metric: took 1.365813209s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1101 09:33:51.988778 2513458 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1101 09:33:51.989891 2513458 api_server.go:141] control plane version: v1.34.1
	I1101 09:33:51.989917 2513458 api_server.go:131] duration metric: took 13.785152ms to wait for apiserver health ...
	I1101 09:33:51.989939 2513458 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:33:51.993139 2513458 system_pods.go:59] 8 kube-system pods found
	I1101 09:33:51.993176 2513458 system_pods.go:61] "coredns-66bc5c9577-qkv9l" [a2ef7fa8-3194-409f-a0f6-ece0ba2f87fd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 09:33:51.993188 2513458 system_pods.go:61] "etcd-newest-cni-124713" [77c1f287-1fd4-4f3e-98c4-eff8afed33ae] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:33:51.993221 2513458 system_pods.go:61] "kindnet-4szq6" [dfa514f9-f59f-40fc-86c0-0005e842ee44] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 09:33:51.993243 2513458 system_pods.go:61] "kube-apiserver-newest-cni-124713" [8b1990b4-e307-4233-8887-5fb43000794c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:33:51.993254 2513458 system_pods.go:61] "kube-controller-manager-newest-cni-124713" [78bce883-2129-458e-b59e-ff30b3aa124a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:33:51.993261 2513458 system_pods.go:61] "kube-proxy-b69rf" [0f001764-a3b7-4774-86b6-ab740da66ac4] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 09:33:51.993269 2513458 system_pods.go:61] "kube-scheduler-newest-cni-124713" [eeea8f33-465e-40e5-a730-9edd13ae1d26] Running
	I1101 09:33:51.993275 2513458 system_pods.go:61] "storage-provisioner" [bdc61907-9695-405b-8300-5fd746e2180c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 09:33:51.993298 2513458 system_pods.go:74] duration metric: took 3.330952ms to wait for pod list to return data ...
	I1101 09:33:51.993314 2513458 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:33:51.995874 2513458 default_sa.go:45] found service account: "default"
	I1101 09:33:51.995893 2513458 default_sa.go:55] duration metric: took 2.572121ms for default service account to be created ...
	I1101 09:33:51.995905 2513458 kubeadm.go:587] duration metric: took 1.377997782s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 09:33:51.995943 2513458 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:33:51.998945 2513458 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 09:33:51.998978 2513458 node_conditions.go:123] node cpu capacity is 2
	I1101 09:33:51.998991 2513458 node_conditions.go:105] duration metric: took 3.035789ms to run NodePressure ...
	I1101 09:33:51.999025 2513458 start.go:242] waiting for startup goroutines ...
	I1101 09:33:52.065196 2513458 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-124713" context rescaled to 1 replicas
	I1101 09:33:52.065234 2513458 start.go:247] waiting for cluster config update ...
	I1101 09:33:52.065270 2513458 start.go:256] writing updated cluster config ...
	I1101 09:33:52.065660 2513458 ssh_runner.go:195] Run: rm -f paused
	I1101 09:33:52.129711 2513458 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 09:33:52.133112 2513458 out.go:179] * Done! kubectl is now configured to use "newest-cni-124713" cluster and "default" namespace by default
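The api_server.go lines above first wait for the kube-apiserver process and then poll https://192.168.76.2:8443/healthz until it answers 200 "ok". A minimal Go sketch of that style of readiness poll, assuming only the endpoint shown in this log (the timeout, interval, and TLS handling are arbitrary; this is illustrative, not minikube's actual implementation):

    // healthz poll sketch; not minikube's api_server.go.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // The test cluster's apiserver uses a cluster-local CA, so verification
            // is skipped here purely to keep the sketch short.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("%s returned 200: %s\n", url, body)
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }

A real client would trust the cluster CA from the kubeconfig instead of skipping verification.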
	
	
	==> CRI-O <==
	Nov 01 09:33:51 newest-cni-124713 crio[835]: time="2025-11-01T09:33:51.762433813Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:33:51 newest-cni-124713 crio[835]: time="2025-11-01T09:33:51.769631751Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=0c8e6dc5-a9df-41c2-ab76-21cf5881e19d name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:33:51 newest-cni-124713 crio[835]: time="2025-11-01T09:33:51.783715824Z" level=info msg="Ran pod sandbox 699b3a0ba3a65825f081ba874674a2b7ca77db074616f27e57cb47045205350a with infra container: kube-system/kube-proxy-b69rf/POD" id=0c8e6dc5-a9df-41c2-ab76-21cf5881e19d name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:33:51 newest-cni-124713 crio[835]: time="2025-11-01T09:33:51.7862967Z" level=info msg="Running pod sandbox: kube-system/kindnet-4szq6/POD" id=bb6c6b27-c6e8-4105-b363-f51e33a143b3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:33:51 newest-cni-124713 crio[835]: time="2025-11-01T09:33:51.786358023Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:33:51 newest-cni-124713 crio[835]: time="2025-11-01T09:33:51.799306686Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=f252b54e-6de6-4042-9d18-570afb81abde name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:33:51 newest-cni-124713 crio[835]: time="2025-11-01T09:33:51.800217405Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=bb6c6b27-c6e8-4105-b363-f51e33a143b3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:33:51 newest-cni-124713 crio[835]: time="2025-11-01T09:33:51.807387692Z" level=info msg="Ran pod sandbox e518e358feadf0061f2c853401d2b50c9da0529fffc96cc37898438f17c86e79 with infra container: kube-system/kindnet-4szq6/POD" id=bb6c6b27-c6e8-4105-b363-f51e33a143b3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:33:51 newest-cni-124713 crio[835]: time="2025-11-01T09:33:51.813559254Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=a4c50f81-abeb-4a38-9d7e-8052099dc911 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:33:51 newest-cni-124713 crio[835]: time="2025-11-01T09:33:51.816671258Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=f401968c-9bc5-4b3a-a71c-554ce738bf85 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:33:51 newest-cni-124713 crio[835]: time="2025-11-01T09:33:51.819063225Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=9dfe0bb4-de1c-4a8c-831f-42d1b44e938a name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:33:51 newest-cni-124713 crio[835]: time="2025-11-01T09:33:51.829546659Z" level=info msg="Creating container: kube-system/kube-proxy-b69rf/kube-proxy" id=d9f03b77-ceda-4571-bf3e-0e52dd48501d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:33:51 newest-cni-124713 crio[835]: time="2025-11-01T09:33:51.829807403Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:33:51 newest-cni-124713 crio[835]: time="2025-11-01T09:33:51.83526143Z" level=info msg="Creating container: kube-system/kindnet-4szq6/kindnet-cni" id=c86b7f44-c136-47d8-9f1d-19cabdb79678 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:33:51 newest-cni-124713 crio[835]: time="2025-11-01T09:33:51.835508356Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:33:51 newest-cni-124713 crio[835]: time="2025-11-01T09:33:51.842434203Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:33:51 newest-cni-124713 crio[835]: time="2025-11-01T09:33:51.843054814Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:33:51 newest-cni-124713 crio[835]: time="2025-11-01T09:33:51.844454717Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:33:51 newest-cni-124713 crio[835]: time="2025-11-01T09:33:51.845809232Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:33:51 newest-cni-124713 crio[835]: time="2025-11-01T09:33:51.873299915Z" level=info msg="Created container 918c78ff25ff9e3635fc93408355bc7953250b65e72f566472263fc8119fab78: kube-system/kindnet-4szq6/kindnet-cni" id=c86b7f44-c136-47d8-9f1d-19cabdb79678 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:33:51 newest-cni-124713 crio[835]: time="2025-11-01T09:33:51.874408291Z" level=info msg="Starting container: 918c78ff25ff9e3635fc93408355bc7953250b65e72f566472263fc8119fab78" id=5bc2f874-813b-4128-a605-230294dfca3f name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:33:51 newest-cni-124713 crio[835]: time="2025-11-01T09:33:51.876180131Z" level=info msg="Started container" PID=1487 containerID=918c78ff25ff9e3635fc93408355bc7953250b65e72f566472263fc8119fab78 description=kube-system/kindnet-4szq6/kindnet-cni id=5bc2f874-813b-4128-a605-230294dfca3f name=/runtime.v1.RuntimeService/StartContainer sandboxID=e518e358feadf0061f2c853401d2b50c9da0529fffc96cc37898438f17c86e79
	Nov 01 09:33:51 newest-cni-124713 crio[835]: time="2025-11-01T09:33:51.886560692Z" level=info msg="Created container a16fe1442fe22723365daa26e313d51e53c1dc76d180da50c262d99e34a25e6c: kube-system/kube-proxy-b69rf/kube-proxy" id=d9f03b77-ceda-4571-bf3e-0e52dd48501d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:33:51 newest-cni-124713 crio[835]: time="2025-11-01T09:33:51.888453595Z" level=info msg="Starting container: a16fe1442fe22723365daa26e313d51e53c1dc76d180da50c262d99e34a25e6c" id=aa950cb6-21ea-42be-b453-7026209ab47d name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:33:51 newest-cni-124713 crio[835]: time="2025-11-01T09:33:51.891126243Z" level=info msg="Started container" PID=1491 containerID=a16fe1442fe22723365daa26e313d51e53c1dc76d180da50c262d99e34a25e6c description=kube-system/kube-proxy-b69rf/kube-proxy id=aa950cb6-21ea-42be-b453-7026209ab47d name=/runtime.v1.RuntimeService/StartContainer sandboxID=699b3a0ba3a65825f081ba874674a2b7ca77db074616f27e57cb47045205350a
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	a16fe1442fe22       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   1 second ago        Running             kube-proxy                0                   699b3a0ba3a65       kube-proxy-b69rf                            kube-system
	918c78ff25ff9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   1 second ago        Running             kindnet-cni               0                   e518e358feadf       kindnet-4szq6                               kube-system
	fa4d433e645f0       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   15 seconds ago      Running             kube-controller-manager   0                   5607ea2215ae8       kube-controller-manager-newest-cni-124713   kube-system
	c8ee860d06a19       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   15 seconds ago      Running             etcd                      0                   39dbf0d38482a       etcd-newest-cni-124713                      kube-system
	e5a11148da676       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   15 seconds ago      Running             kube-apiserver            0                   10c7bfdb4652d       kube-apiserver-newest-cni-124713            kube-system
	be27b5f937671       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   15 seconds ago      Running             kube-scheduler            0                   1cbd33a679f04       kube-scheduler-newest-cni-124713            kube-system
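The table above is the runtime's own view of the pods' containers, fetched over the CRI socket (roughly the listing crictl gives). As a sketch, assuming CRI-O's default socket path /var/run/crio/crio.sock and root access to it, the same data can be pulled with the CRI Go client; this is illustrative, not something the test harness runs:

    // CRI ListContainers sketch; assumes the CRI-O socket path below.
    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            // Id, name, state and attempt correspond to the columns above.
            fmt.Printf("%-13.13s %-25s %-20s attempt=%d\n",
                c.Id, c.Metadata.Name, c.State.String(), c.Metadata.Attempt)
        }
    }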
	
	
	==> describe nodes <==
	Name:               newest-cni-124713
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-124713
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=newest-cni-124713
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_33_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:33:43 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-124713
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:33:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:33:46 +0000   Sat, 01 Nov 2025 09:33:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:33:46 +0000   Sat, 01 Nov 2025 09:33:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:33:46 +0000   Sat, 01 Nov 2025 09:33:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 01 Nov 2025 09:33:46 +0000   Sat, 01 Nov 2025 09:33:38 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-124713
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                640263e5-c6bc-4077-95b9-66d3ed0270b1
	  Boot ID:                    eebecd53-57fd-46e5-aa39-103fca906436
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-124713                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         7s
	  kube-system                 kindnet-4szq6                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-124713             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-controller-manager-newest-cni-124713    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-proxy-b69rf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-124713             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 1s                 kube-proxy       
	  Warning  CgroupV1                 16s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  15s (x8 over 15s)  kubelet          Node newest-cni-124713 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15s (x8 over 15s)  kubelet          Node newest-cni-124713 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15s (x8 over 15s)  kubelet          Node newest-cni-124713 status is now: NodeHasSufficientPID
	  Normal   Starting                 7s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 7s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7s                 kubelet          Node newest-cni-124713 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7s                 kubelet          Node newest-cni-124713 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7s                 kubelet          Node newest-cni-124713 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3s                 node-controller  Node newest-cni-124713 event: Registered Node newest-cni-124713 in Controller
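At this point the node is still NotReady because no CNI config has been written yet, and it carries the matching node.kubernetes.io/not-ready taints; that is also why coredns and storage-provisioner were reported as Unschedulable in the pod list earlier. A small client-go sketch that reads exactly those fields, assuming a kubeconfig at the default location and the node name taken from this log (illustrative only):

    // Read the Ready condition and taints shown by "describe nodes" above.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "newest-cni-124713", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                fmt.Printf("Ready=%s reason=%s message=%q\n", c.Status, c.Reason, c.Message)
            }
        }
        for _, t := range node.Spec.Taints {
            fmt.Printf("taint %s:%s\n", t.Key, t.Effect)
        }
    }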
	
	
	==> dmesg <==
	[Nov 1 09:13] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:14] overlayfs: idmapped layers are currently not supported
	[  +7.992192] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:15] overlayfs: idmapped layers are currently not supported
	[ +24.457663] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:16] overlayfs: idmapped layers are currently not supported
	[ +26.408819] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:18] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:22] overlayfs: idmapped layers are currently not supported
	[ +31.970573] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:24] overlayfs: idmapped layers are currently not supported
	[ +34.721891] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:25] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:26] overlayfs: idmapped layers are currently not supported
	[  +0.217637] overlayfs: idmapped layers are currently not supported
	[ +42.063471] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:29] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:30] overlayfs: idmapped layers are currently not supported
	[ +22.794250] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:31] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:33] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c8ee860d06a19fc6fd67d1ae6330fd0a59f287e4e3c3e6922eaf86da2d58e5a5] <==
	{"level":"warn","ts":"2025-11-01T09:33:42.048792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:33:42.086379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:33:42.133857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:33:42.220500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:33:42.222887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:33:42.260243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:33:42.288482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:33:42.316019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:33:42.347608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:33:42.395938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:33:42.421797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:33:42.458443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:33:42.480351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:33:42.520402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:33:42.560861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:33:42.561794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:33:42.573214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:33:42.587727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:33:42.602478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:33:42.617610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:33:42.638331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:33:42.662120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:33:42.676649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:33:42.691266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:33:42.756305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56412","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:33:53 up 18:16,  0 user,  load average: 3.24, 3.43, 3.05
	Linux newest-cni-124713 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [918c78ff25ff9e3635fc93408355bc7953250b65e72f566472263fc8119fab78] <==
	I1101 09:33:52.051415       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:33:52.051702       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 09:33:52.051845       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:33:52.051876       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:33:52.051892       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:33:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:33:52.253006       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:33:52.253080       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:33:52.253114       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:33:52.253262       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [e5a11148da676f31a43ab1d4f9037108d7b89d680b4d3bdd4b72dc2ae592eb25] <==
	I1101 09:33:43.519058       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:33:43.522710       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:33:43.522949       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	E1101 09:33:43.580339       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1101 09:33:43.587698       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:33:43.587769       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:33:43.668713       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:33:44.273249       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 09:33:44.277448       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 09:33:44.277474       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:33:44.959401       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:33:45.016904       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	E1101 09:33:45.221091       1 repairip.go:372] "Unhandled Error" err="the ClusterIP [IPv4]: 10.96.0.1 for Service default/kubernetes is not allocated; repairing" logger="UnhandledError"
	I1101 09:33:45.223765       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 09:33:45.243843       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1101 09:33:45.245402       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:33:45.253151       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:33:45.391312       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:33:46.399898       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:33:46.419440       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 09:33:46.430542       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 09:33:51.200794       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:33:51.394616       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1101 09:33:51.542209       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:33:51.554261       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [fa4d433e645f0e4f0e040aeb6f46c0efb86ffb78aeaabd8475e41ee655108a7c] <==
	I1101 09:33:50.413059       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 09:33:50.413175       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 09:33:50.413270       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-124713"
	I1101 09:33:50.413330       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 09:33:50.423698       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 09:33:50.432364       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 09:33:50.432384       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:33:50.436893       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:33:50.436918       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:33:50.436925       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:33:50.437293       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 09:33:50.443326       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 09:33:50.443431       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 09:33:50.443470       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 09:33:50.443494       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 09:33:50.443508       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 09:33:50.443548       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 09:33:50.443608       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 09:33:50.444041       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:33:50.444429       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 09:33:50.444570       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 09:33:50.444582       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 09:33:50.444595       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 09:33:50.444604       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 09:33:50.447658       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	
	
	==> kube-proxy [a16fe1442fe22723365daa26e313d51e53c1dc76d180da50c262d99e34a25e6c] <==
	I1101 09:33:51.942254       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:33:52.061551       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:33:52.167004       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:33:52.167034       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 09:33:52.167102       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:33:52.197761       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:33:52.197877       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:33:52.201941       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:33:52.202283       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:33:52.202455       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:33:52.203659       1 config.go:200] "Starting service config controller"
	I1101 09:33:52.203714       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:33:52.203761       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:33:52.203787       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:33:52.203821       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:33:52.203889       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:33:52.204559       1 config.go:309] "Starting node config controller"
	I1101 09:33:52.204615       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:33:52.204644       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:33:52.304273       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:33:52.304308       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:33:52.304350       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [be27b5f9376711e6fc7529e8471c994077ab15b37ce6776f10e5166e8ff0ff23] <==
	E1101 09:33:43.490040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:33:43.490191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:33:43.490326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:33:43.490479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:33:43.490584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:33:43.490763       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:33:43.490903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:33:43.494742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:33:43.495199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:33:43.495299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:33:43.495402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:33:43.495513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:33:43.495646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:33:44.368221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:33:44.384527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:33:44.440313       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:33:44.442638       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:33:44.447452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:33:44.455118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:33:44.474392       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:33:44.489840       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:33:44.640118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:33:44.693734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:33:44.843161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1101 09:33:47.058217       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
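All of the "Failed to watch ... is forbidden" errors above predate 09:33:47: they are typical of the scheduler starting its informers before the apiserver has finished bootstrapping RBAC, and they stop once the cache sync on the last line goes through. To confirm after the fact that system:kube-scheduler can list a given resource, a SubjectAccessReview issued from an admin kubeconfig works; this sketch is illustrative and not part of the test:

    // Ask the apiserver whether system:kube-scheduler may list StorageClasses.
    package main

    import (
        "context"
        "fmt"

        authv1 "k8s.io/api/authorization/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        sar := &authv1.SubjectAccessReview{
            Spec: authv1.SubjectAccessReviewSpec{
                User: "system:kube-scheduler",
                ResourceAttributes: &authv1.ResourceAttributes{
                    Verb:     "list",
                    Group:    "storage.k8s.io",
                    Resource: "storageclasses",
                },
            },
        }
        res, err := cs.AuthorizationV1().SubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
    }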
	
	
	==> kubelet <==
	Nov 01 09:33:46 newest-cni-124713 kubelet[1309]: I1101 09:33:46.556875    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/012eed52c8ad42bfff101e6b31395ed5-usr-share-ca-certificates\") pod \"kube-controller-manager-newest-cni-124713\" (UID: \"012eed52c8ad42bfff101e6b31395ed5\") " pod="kube-system/kube-controller-manager-newest-cni-124713"
	Nov 01 09:33:46 newest-cni-124713 kubelet[1309]: I1101 09:33:46.580690    1309 kubelet_node_status.go:75] "Attempting to register node" node="newest-cni-124713"
	Nov 01 09:33:46 newest-cni-124713 kubelet[1309]: I1101 09:33:46.596817    1309 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-124713"
	Nov 01 09:33:46 newest-cni-124713 kubelet[1309]: I1101 09:33:46.596925    1309 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-124713"
	Nov 01 09:33:47 newest-cni-124713 kubelet[1309]: I1101 09:33:47.301272    1309 apiserver.go:52] "Watching apiserver"
	Nov 01 09:33:47 newest-cni-124713 kubelet[1309]: I1101 09:33:47.345267    1309 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 01 09:33:47 newest-cni-124713 kubelet[1309]: I1101 09:33:47.407051    1309 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-124713"
	Nov 01 09:33:47 newest-cni-124713 kubelet[1309]: E1101 09:33:47.420011    1309 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-124713\" already exists" pod="kube-system/kube-scheduler-newest-cni-124713"
	Nov 01 09:33:47 newest-cni-124713 kubelet[1309]: I1101 09:33:47.476841    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-124713" podStartSLOduration=2.476820484 podStartE2EDuration="2.476820484s" podCreationTimestamp="2025-11-01 09:33:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:33:47.444060514 +0000 UTC m=+1.224561629" watchObservedRunningTime="2025-11-01 09:33:47.476820484 +0000 UTC m=+1.257321565"
	Nov 01 09:33:47 newest-cni-124713 kubelet[1309]: I1101 09:33:47.497138    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-124713" podStartSLOduration=1.497109209 podStartE2EDuration="1.497109209s" podCreationTimestamp="2025-11-01 09:33:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:33:47.477049934 +0000 UTC m=+1.257551023" watchObservedRunningTime="2025-11-01 09:33:47.497109209 +0000 UTC m=+1.277610299"
	Nov 01 09:33:47 newest-cni-124713 kubelet[1309]: I1101 09:33:47.497250    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-124713" podStartSLOduration=1.4972452459999999 podStartE2EDuration="1.497245246s" podCreationTimestamp="2025-11-01 09:33:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:33:47.496926872 +0000 UTC m=+1.277427986" watchObservedRunningTime="2025-11-01 09:33:47.497245246 +0000 UTC m=+1.277746336"
	Nov 01 09:33:47 newest-cni-124713 kubelet[1309]: I1101 09:33:47.535125    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-124713" podStartSLOduration=1.535104266 podStartE2EDuration="1.535104266s" podCreationTimestamp="2025-11-01 09:33:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:33:47.519220309 +0000 UTC m=+1.299721398" watchObservedRunningTime="2025-11-01 09:33:47.535104266 +0000 UTC m=+1.315605364"
	Nov 01 09:33:50 newest-cni-124713 kubelet[1309]: I1101 09:33:50.419099    1309 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 01 09:33:50 newest-cni-124713 kubelet[1309]: I1101 09:33:50.420352    1309 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 01 09:33:51 newest-cni-124713 kubelet[1309]: I1101 09:33:51.494391    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0f001764-a3b7-4774-86b6-ab740da66ac4-kube-proxy\") pod \"kube-proxy-b69rf\" (UID: \"0f001764-a3b7-4774-86b6-ab740da66ac4\") " pod="kube-system/kube-proxy-b69rf"
	Nov 01 09:33:51 newest-cni-124713 kubelet[1309]: I1101 09:33:51.494449    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f001764-a3b7-4774-86b6-ab740da66ac4-lib-modules\") pod \"kube-proxy-b69rf\" (UID: \"0f001764-a3b7-4774-86b6-ab740da66ac4\") " pod="kube-system/kube-proxy-b69rf"
	Nov 01 09:33:51 newest-cni-124713 kubelet[1309]: I1101 09:33:51.494470    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dfa514f9-f59f-40fc-86c0-0005e842ee44-xtables-lock\") pod \"kindnet-4szq6\" (UID: \"dfa514f9-f59f-40fc-86c0-0005e842ee44\") " pod="kube-system/kindnet-4szq6"
	Nov 01 09:33:51 newest-cni-124713 kubelet[1309]: I1101 09:33:51.494485    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dfa514f9-f59f-40fc-86c0-0005e842ee44-lib-modules\") pod \"kindnet-4szq6\" (UID: \"dfa514f9-f59f-40fc-86c0-0005e842ee44\") " pod="kube-system/kindnet-4szq6"
	Nov 01 09:33:51 newest-cni-124713 kubelet[1309]: I1101 09:33:51.494513    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0f001764-a3b7-4774-86b6-ab740da66ac4-xtables-lock\") pod \"kube-proxy-b69rf\" (UID: \"0f001764-a3b7-4774-86b6-ab740da66ac4\") " pod="kube-system/kube-proxy-b69rf"
	Nov 01 09:33:51 newest-cni-124713 kubelet[1309]: I1101 09:33:51.494533    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7ghn\" (UniqueName: \"kubernetes.io/projected/0f001764-a3b7-4774-86b6-ab740da66ac4-kube-api-access-b7ghn\") pod \"kube-proxy-b69rf\" (UID: \"0f001764-a3b7-4774-86b6-ab740da66ac4\") " pod="kube-system/kube-proxy-b69rf"
	Nov 01 09:33:51 newest-cni-124713 kubelet[1309]: I1101 09:33:51.494554    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/dfa514f9-f59f-40fc-86c0-0005e842ee44-cni-cfg\") pod \"kindnet-4szq6\" (UID: \"dfa514f9-f59f-40fc-86c0-0005e842ee44\") " pod="kube-system/kindnet-4szq6"
	Nov 01 09:33:51 newest-cni-124713 kubelet[1309]: I1101 09:33:51.494577    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq65q\" (UniqueName: \"kubernetes.io/projected/dfa514f9-f59f-40fc-86c0-0005e842ee44-kube-api-access-mq65q\") pod \"kindnet-4szq6\" (UID: \"dfa514f9-f59f-40fc-86c0-0005e842ee44\") " pod="kube-system/kindnet-4szq6"
	Nov 01 09:33:51 newest-cni-124713 kubelet[1309]: I1101 09:33:51.652770    1309 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 01 09:33:52 newest-cni-124713 kubelet[1309]: I1101 09:33:52.506255    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-4szq6" podStartSLOduration=1.5062355410000001 podStartE2EDuration="1.506235541s" podCreationTimestamp="2025-11-01 09:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:33:52.463477367 +0000 UTC m=+6.243978473" watchObservedRunningTime="2025-11-01 09:33:52.506235541 +0000 UTC m=+6.286736631"
	Nov 01 09:33:52 newest-cni-124713 kubelet[1309]: I1101 09:33:52.537366    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b69rf" podStartSLOduration=1.537292074 podStartE2EDuration="1.537292074s" podCreationTimestamp="2025-11-01 09:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:33:52.506663623 +0000 UTC m=+6.287164738" watchObservedRunningTime="2025-11-01 09:33:52.537292074 +0000 UTC m=+6.317793164"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-124713 -n newest-cni-124713
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-124713 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-qkv9l storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-124713 describe pod coredns-66bc5c9577-qkv9l storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-124713 describe pod coredns-66bc5c9577-qkv9l storage-provisioner: exit status 1 (117.800096ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-qkv9l" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-124713 describe pod coredns-66bc5c9577-qkv9l storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.81s)
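The harness picks the pods to post-mortem by listing everything whose status.phase is not Running; by the time it runs describe, those pod names no longer exist, hence the NotFound errors and the exit status 1 above. The same query expressed with client-go, as a sketch assuming a kubeconfig at the default location:

    // List non-Running pods across all namespaces, like the field-selector query above.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
            FieldSelector: "status.phase!=Running",
        })
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
        }
    }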

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (7.61s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-124713 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-124713 --alsologtostderr -v=1: exit status 80 (2.313794631s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-124713 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:34:17.680752 2521236 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:34:17.680868 2521236 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:34:17.680873 2521236 out.go:374] Setting ErrFile to fd 2...
	I1101 09:34:17.680878 2521236 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:34:17.681226 2521236 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 09:34:17.681529 2521236 out.go:368] Setting JSON to false
	I1101 09:34:17.681554 2521236 mustload.go:66] Loading cluster: newest-cni-124713
	I1101 09:34:17.682212 2521236 config.go:182] Loaded profile config "newest-cni-124713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:34:17.682876 2521236 cli_runner.go:164] Run: docker container inspect newest-cni-124713 --format={{.State.Status}}
	I1101 09:34:17.707829 2521236 host.go:66] Checking if "newest-cni-124713" exists ...
	I1101 09:34:17.708174 2521236 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:34:17.821794 2521236 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-01 09:34:17.811614904 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:34:17.822439 2521236 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-124713 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 09:34:17.825723 2521236 out.go:179] * Pausing node newest-cni-124713 ... 
	I1101 09:34:17.828717 2521236 host.go:66] Checking if "newest-cni-124713" exists ...
	I1101 09:34:17.829097 2521236 ssh_runner.go:195] Run: systemctl --version
	I1101 09:34:17.829138 2521236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-124713
	I1101 09:34:17.864275 2521236 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36380 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/newest-cni-124713/id_rsa Username:docker}
	I1101 09:34:18.028257 2521236 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:34:18.062536 2521236 pause.go:52] kubelet running: true
	I1101 09:34:18.062608 2521236 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:34:18.375840 2521236 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:34:18.375940 2521236 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:34:18.468066 2521236 cri.go:89] found id: "1c56f5f7c7738910e9a4ea8cae5cbe061677d614b47ca42da218c937131d6f7b"
	I1101 09:34:18.468128 2521236 cri.go:89] found id: "7aca1deb842a0ab6957d0e536d9d48ac15390ed42079c675c2b327622f9c757e"
	I1101 09:34:18.468146 2521236 cri.go:89] found id: "a8cd9348cd8f6c0612fd1783c7194b83214ace4b8f1e42197ad6df9e56662e12"
	I1101 09:34:18.468164 2521236 cri.go:89] found id: "0055fe7d149aaac0d9114c4fce265f810e5bcbdf0bac632b3be71cba9a166106"
	I1101 09:34:18.468183 2521236 cri.go:89] found id: "eafecd62f9287b62383e0e205bf02befe8b40647500ac451c7a98c4b9d33b883"
	I1101 09:34:18.468214 2521236 cri.go:89] found id: "87351c6e0eb295b8873ea951caef05b3e21649cfc05ef5547ec27729a4256b5c"
	I1101 09:34:18.468239 2521236 cri.go:89] found id: ""
	I1101 09:34:18.468325 2521236 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:34:18.480415 2521236 retry.go:31] will retry after 283.95534ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:34:18Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:34:18.764976 2521236 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:34:18.778941 2521236 pause.go:52] kubelet running: false
	I1101 09:34:18.779042 2521236 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:34:18.987488 2521236 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:34:18.987604 2521236 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:34:19.068840 2521236 cri.go:89] found id: "1c56f5f7c7738910e9a4ea8cae5cbe061677d614b47ca42da218c937131d6f7b"
	I1101 09:34:19.068863 2521236 cri.go:89] found id: "7aca1deb842a0ab6957d0e536d9d48ac15390ed42079c675c2b327622f9c757e"
	I1101 09:34:19.068868 2521236 cri.go:89] found id: "a8cd9348cd8f6c0612fd1783c7194b83214ace4b8f1e42197ad6df9e56662e12"
	I1101 09:34:19.068872 2521236 cri.go:89] found id: "0055fe7d149aaac0d9114c4fce265f810e5bcbdf0bac632b3be71cba9a166106"
	I1101 09:34:19.068876 2521236 cri.go:89] found id: "eafecd62f9287b62383e0e205bf02befe8b40647500ac451c7a98c4b9d33b883"
	I1101 09:34:19.068880 2521236 cri.go:89] found id: "87351c6e0eb295b8873ea951caef05b3e21649cfc05ef5547ec27729a4256b5c"
	I1101 09:34:19.068883 2521236 cri.go:89] found id: ""
	I1101 09:34:19.068943 2521236 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:34:19.086239 2521236 retry.go:31] will retry after 468.652055ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:34:19Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:34:19.555622 2521236 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:34:19.569187 2521236 pause.go:52] kubelet running: false
	I1101 09:34:19.569258 2521236 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:34:19.769918 2521236 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:34:19.770018 2521236 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:34:19.852233 2521236 cri.go:89] found id: "1c56f5f7c7738910e9a4ea8cae5cbe061677d614b47ca42da218c937131d6f7b"
	I1101 09:34:19.852255 2521236 cri.go:89] found id: "7aca1deb842a0ab6957d0e536d9d48ac15390ed42079c675c2b327622f9c757e"
	I1101 09:34:19.852260 2521236 cri.go:89] found id: "a8cd9348cd8f6c0612fd1783c7194b83214ace4b8f1e42197ad6df9e56662e12"
	I1101 09:34:19.852264 2521236 cri.go:89] found id: "0055fe7d149aaac0d9114c4fce265f810e5bcbdf0bac632b3be71cba9a166106"
	I1101 09:34:19.852268 2521236 cri.go:89] found id: "eafecd62f9287b62383e0e205bf02befe8b40647500ac451c7a98c4b9d33b883"
	I1101 09:34:19.852271 2521236 cri.go:89] found id: "87351c6e0eb295b8873ea951caef05b3e21649cfc05ef5547ec27729a4256b5c"
	I1101 09:34:19.852274 2521236 cri.go:89] found id: ""
	I1101 09:34:19.852349 2521236 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:34:19.868094 2521236 out.go:203] 
	W1101 09:34:19.871051 2521236 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:34:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:34:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:34:19.871072 2521236 out.go:285] * 
	* 
	W1101 09:34:19.883684 2521236 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:34:19.885915 2521236 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-124713 --alsologtostderr -v=1 failed: exit status 80
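For context, the GUEST_PAUSE error above comes from the repeated "sudo runc list -f json" calls visible in the stderr log: each attempt exited 1 with "open /run/runc: no such file or directory", even though crictl had just listed running containers in the kube-system namespace. A minimal sketch of how that check could be reproduced by hand, assuming the newest-cni-124713 profile is still running and using "minikube ssh" in place of the test's ssh_runner (both assumptions, not part of the test run):

	# run the same command the pause path keeps retrying inside the node
	out/minikube-linux-arm64 ssh -p newest-cni-124713 -- sudo runc list -f json
	# on this failure it exits with status 1 and prints:
	#   open /run/runc: no such file or directory
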
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-124713
helpers_test.go:243: (dbg) docker inspect newest-cni-124713:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d1b820a5201faa8d9964727187addaaa218935f7dd7e8a43484ca4d1526e7728",
	        "Created": "2025-11-01T09:33:17.58430684Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2518846,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:33:57.161451023Z",
	            "FinishedAt": "2025-11-01T09:33:56.121072471Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/d1b820a5201faa8d9964727187addaaa218935f7dd7e8a43484ca4d1526e7728/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d1b820a5201faa8d9964727187addaaa218935f7dd7e8a43484ca4d1526e7728/hostname",
	        "HostsPath": "/var/lib/docker/containers/d1b820a5201faa8d9964727187addaaa218935f7dd7e8a43484ca4d1526e7728/hosts",
	        "LogPath": "/var/lib/docker/containers/d1b820a5201faa8d9964727187addaaa218935f7dd7e8a43484ca4d1526e7728/d1b820a5201faa8d9964727187addaaa218935f7dd7e8a43484ca4d1526e7728-json.log",
	        "Name": "/newest-cni-124713",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-124713:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-124713",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d1b820a5201faa8d9964727187addaaa218935f7dd7e8a43484ca4d1526e7728",
	                "LowerDir": "/var/lib/docker/overlay2/2b21a768a0c8792bc24ff211492b34b7cdaf559a9b39b08bd8baef77073b5397-init/diff:/var/lib/docker/overlay2/e248e2c4c8c52e2b41c7098e27a1e6d3433c7b0d01c47093073da500268c4b77/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2b21a768a0c8792bc24ff211492b34b7cdaf559a9b39b08bd8baef77073b5397/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2b21a768a0c8792bc24ff211492b34b7cdaf559a9b39b08bd8baef77073b5397/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2b21a768a0c8792bc24ff211492b34b7cdaf559a9b39b08bd8baef77073b5397/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-124713",
	                "Source": "/var/lib/docker/volumes/newest-cni-124713/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-124713",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-124713",
	                "name.minikube.sigs.k8s.io": "newest-cni-124713",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b27bcb4cf8bd34929941a4d72b503a2749c1e431d0025f48b8fe4c6bf39edc16",
	            "SandboxKey": "/var/run/docker/netns/b27bcb4cf8bd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36380"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36381"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36384"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36382"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36383"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-124713": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:7f:c5:84:1b:5b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cbec865c42edd0a01496f38885614d185989b6c702231d2c3f85ce55dc4aabc5",
	                    "EndpointID": "4af02415245de5e4e87d20bc0dfbbf9ec0ffc0ef8cdc57122e7d98ba20e93d5c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-124713",
	                        "d1b820a5201f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
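The SSH port used earlier by the pause attempt (Port:36380 in the sshutil line) corresponds to the "22/tcp" host port published in this inspect output. As an illustration only, the same value can be read back with the Go template that the cli_runner step used, assuming the docker CLI is available on the host:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-124713
	# prints 36380 for the container state captured above
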
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-124713 -n newest-cni-124713
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-124713 -n newest-cni-124713: exit status 2 (441.866849ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-124713 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-124713 logs -n 25: (1.676078224s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable metrics-server -p embed-certs-312549 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ stop    │ -p embed-certs-312549 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ image   │ no-preload-357229 image list --format=json                                                                                                                                                                                                    │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ addons  │ enable dashboard -p embed-certs-312549 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ start   │ -p embed-certs-312549 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:32 UTC │
	│ pause   │ -p no-preload-357229 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ delete  │ -p no-preload-357229                                                                                                                                                                                                                          │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ delete  │ -p no-preload-357229                                                                                                                                                                                                                          │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ delete  │ -p disable-driver-mounts-054033                                                                                                                                                                                                               │ disable-driver-mounts-054033 │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ start   │ -p default-k8s-diff-port-703627 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-703627 │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:33 UTC │
	│ image   │ embed-certs-312549 image list --format=json                                                                                                                                                                                                   │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ pause   │ -p embed-certs-312549 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │                     │
	│ delete  │ -p embed-certs-312549                                                                                                                                                                                                                         │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ delete  │ -p embed-certs-312549                                                                                                                                                                                                                         │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ start   │ -p newest-cni-124713 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-124713            │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-703627 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-703627 │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-703627 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-703627 │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-703627 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-703627 │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ start   │ -p default-k8s-diff-port-703627 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-703627 │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-124713 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-124713            │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │                     │
	│ stop    │ -p newest-cni-124713 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-124713            │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ addons  │ enable dashboard -p newest-cni-124713 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-124713            │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ start   │ -p newest-cni-124713 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-124713            │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:34 UTC │
	│ image   │ newest-cni-124713 image list --format=json                                                                                                                                                                                                    │ newest-cni-124713            │ jenkins │ v1.37.0 │ 01 Nov 25 09:34 UTC │ 01 Nov 25 09:34 UTC │
	│ pause   │ -p newest-cni-124713 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-124713            │ jenkins │ v1.37.0 │ 01 Nov 25 09:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:33:56
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:33:56.762708 2518640 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:33:56.762923 2518640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:33:56.762946 2518640 out.go:374] Setting ErrFile to fd 2...
	I1101 09:33:56.762964 2518640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:33:56.763244 2518640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 09:33:56.763674 2518640 out.go:368] Setting JSON to false
	I1101 09:33:56.764635 2518640 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":65783,"bootTime":1761923854,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 09:33:56.764738 2518640 start.go:143] virtualization:  
	I1101 09:33:56.768770 2518640 out.go:179] * [newest-cni-124713] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 09:33:56.773055 2518640 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:33:56.773124 2518640 notify.go:221] Checking for updates...
	I1101 09:33:56.779413 2518640 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:33:56.782410 2518640 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:33:56.785921 2518640 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2314135/.minikube
	I1101 09:33:56.788701 2518640 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 09:33:56.791647 2518640 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:33:56.794940 2518640 config.go:182] Loaded profile config "newest-cni-124713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:33:56.795452 2518640 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:33:56.844367 2518640 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:33:56.844476 2518640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:33:56.938409 2518640 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 09:33:56.925126567 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:33:56.938511 2518640 docker.go:319] overlay module found
	I1101 09:33:56.942132 2518640 out.go:179] * Using the docker driver based on existing profile
	I1101 09:33:56.945094 2518640 start.go:309] selected driver: docker
	I1101 09:33:56.945111 2518640 start.go:930] validating driver "docker" against &{Name:newest-cni-124713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-124713 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:33:56.945208 2518640 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:33:56.945905 2518640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:33:57.059544 2518640 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 09:33:57.047460642 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:33:57.059910 2518640 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 09:33:57.059936 2518640 cni.go:84] Creating CNI manager for ""
	I1101 09:33:57.059987 2518640 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:33:57.060026 2518640 start.go:353] cluster config:
	{Name:newest-cni-124713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-124713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:33:57.063351 2518640 out.go:179] * Starting "newest-cni-124713" primary control-plane node in "newest-cni-124713" cluster
	I1101 09:33:57.066137 2518640 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:33:57.069052 2518640 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:33:57.071818 2518640 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:33:57.071896 2518640 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 09:33:57.071905 2518640 cache.go:59] Caching tarball of preloaded images
	I1101 09:33:57.071995 2518640 preload.go:233] Found /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 09:33:57.072004 2518640 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:33:57.072122 2518640 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/config.json ...
	I1101 09:33:57.072316 2518640 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:33:57.098932 2518640 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:33:57.098950 2518640 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:33:57.098962 2518640 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:33:57.098992 2518640 start.go:360] acquireMachinesLock for newest-cni-124713: {Name:mkc03165af37613c9c0e7f1c90ff2df91e2b25ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:33:57.099043 2518640 start.go:364] duration metric: took 33.788µs to acquireMachinesLock for "newest-cni-124713"
	I1101 09:33:57.099062 2518640 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:33:57.099067 2518640 fix.go:54] fixHost starting: 
	I1101 09:33:57.099328 2518640 cli_runner.go:164] Run: docker container inspect newest-cni-124713 --format={{.State.Status}}
	I1101 09:33:57.121509 2518640 fix.go:112] recreateIfNeeded on newest-cni-124713: state=Stopped err=<nil>
	W1101 09:33:57.121536 2518640 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:33:56.053616 2516487 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-703627 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:33:56.070397 2516487 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 09:33:56.074812 2516487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:33:56.084957 2516487 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-703627 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-703627 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:33:56.085098 2516487 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:33:56.085166 2516487 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:33:56.133321 2516487 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:33:56.133341 2516487 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:33:56.133394 2516487 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:33:56.164010 2516487 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:33:56.164032 2516487 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:33:56.164040 2516487 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1101 09:33:56.164147 2516487 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-703627 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-703627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:33:56.164231 2516487 ssh_runner.go:195] Run: crio config
	I1101 09:33:56.239966 2516487 cni.go:84] Creating CNI manager for ""
	I1101 09:33:56.239986 2516487 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:33:56.240006 2516487 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:33:56.240029 2516487 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-703627 NodeName:default-k8s-diff-port-703627 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:33:56.240149 2516487 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-703627"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:33:56.240210 2516487 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:33:56.248653 2516487 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:33:56.248728 2516487 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:33:56.276144 2516487 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1101 09:33:56.321208 2516487 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:33:56.334333 2516487 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
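The kubeadm/kubelet/kube-proxy configuration rendered above is what this scp writes to /var/tmp/minikube/kubeadm.yaml.new. On the restart path it is not applied blindly: a little further down the log it is diffed against the copy already on the node, and an empty diff is what produces the "does not require reconfiguration" message. That decision step, reduced to a sketch:

	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	  && echo "running cluster does not require reconfiguration"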
	I1101 09:33:56.358257 2516487 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:33:56.362511 2516487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:33:56.373246 2516487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:33:56.515098 2516487 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:33:56.534683 2516487 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627 for IP: 192.168.85.2
	I1101 09:33:56.534704 2516487 certs.go:195] generating shared ca certs ...
	I1101 09:33:56.534719 2516487 certs.go:227] acquiring lock for ca certs: {Name:mk24842b93d4e231663829c7c8677798ff77a3a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:33:56.534849 2516487 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key
	I1101 09:33:56.534898 2516487 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key
	I1101 09:33:56.534910 2516487 certs.go:257] generating profile certs ...
	I1101 09:33:56.535006 2516487 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/client.key
	I1101 09:33:56.535073 2516487 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/apiserver.key.3f1ecf36
	I1101 09:33:56.535119 2516487 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/proxy-client.key
	I1101 09:33:56.535227 2516487 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem (1338 bytes)
	W1101 09:33:56.535258 2516487 certs.go:480] ignoring /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982_empty.pem, impossibly tiny 0 bytes
	I1101 09:33:56.535270 2516487 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 09:33:56.535298 2516487 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:33:56.535322 2516487 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:33:56.535347 2516487 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem (1675 bytes)
	I1101 09:33:56.535393 2516487 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem (1708 bytes)
	I1101 09:33:56.536241 2516487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:33:56.561130 2516487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 09:33:56.580229 2516487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:33:56.599397 2516487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:33:56.638752 2516487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1101 09:33:56.667949 2516487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:33:56.741049 2516487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:33:56.785095 2516487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 09:33:56.828169 2516487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:33:56.858150 2516487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem --> /usr/share/ca-certificates/2315982.pem (1338 bytes)
	I1101 09:33:56.880845 2516487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem --> /usr/share/ca-certificates/23159822.pem (1708 bytes)
	I1101 09:33:56.908427 2516487 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:33:56.930075 2516487 ssh_runner.go:195] Run: openssl version
	I1101 09:33:56.941103 2516487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:33:56.950724 2516487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:33:56.956339 2516487 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:33:56.956411 2516487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:33:56.998844 2516487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:33:57.008936 2516487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2315982.pem && ln -fs /usr/share/ca-certificates/2315982.pem /etc/ssl/certs/2315982.pem"
	I1101 09:33:57.018797 2516487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2315982.pem
	I1101 09:33:57.023403 2516487 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 08:36 /usr/share/ca-certificates/2315982.pem
	I1101 09:33:57.023472 2516487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2315982.pem
	I1101 09:33:57.073723 2516487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2315982.pem /etc/ssl/certs/51391683.0"
	I1101 09:33:57.082417 2516487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23159822.pem && ln -fs /usr/share/ca-certificates/23159822.pem /etc/ssl/certs/23159822.pem"
	I1101 09:33:57.091206 2516487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23159822.pem
	I1101 09:33:57.095540 2516487 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 08:36 /usr/share/ca-certificates/23159822.pem
	I1101 09:33:57.095599 2516487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23159822.pem
	I1101 09:33:57.139155 2516487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23159822.pem /etc/ssl/certs/3ec20f2e.0"
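The ls / openssl x509 -hash / ln -fs sequence above populates an OpenSSL-style trust directory: each CA file copied under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (b5213941.0 for minikubeCA.pem in this run). The same idea as a standalone sketch, using the certificate path from the log:

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")    # e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # ".0" suffix; a hash collision would use ".1", ".2", ...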
	I1101 09:33:57.154103 2516487 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:33:57.160576 2516487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:33:57.207620 2516487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:33:57.274606 2516487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:33:57.357349 2516487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:33:57.462523 2516487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:33:57.631264 2516487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
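The six openssl x509 -checkend 86400 runs above verify that none of the kubeadm-managed certificates expires within the next 24 hours (86400 seconds); a non-zero exit here would trigger certificate regeneration. The same check over all of them, as a compact sketch:

	for c in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
	  openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${c}.crt" \
	    || echo "${c}.crt expires within 24h"
	done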
	I1101 09:33:57.798074 2516487 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-703627 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-703627 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:33:57.798164 2516487 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:33:57.798242 2516487 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:33:57.905206 2516487 cri.go:89] found id: "ee79a7fc9cfee9bef0f776db44e3429ff28411131f6bdc1c4562483440dc3f4c"
	I1101 09:33:57.905239 2516487 cri.go:89] found id: "da7e2f29a75554b0877ff12539ff3a7b3a2f4e382fdeae7e7c099e23f545bfe9"
	I1101 09:33:57.905244 2516487 cri.go:89] found id: "ae10c649f560f9607936e15ba64a4779c42997b6bfc46ec03edd143e585f8bb2"
	I1101 09:33:57.905247 2516487 cri.go:89] found id: "c7d1cc29b1ea5c8867b99a096fc1bb9f05c294172a955361ff24adccbc307e8b"
	I1101 09:33:57.905250 2516487 cri.go:89] found id: ""
	I1101 09:33:57.905301 2516487 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 09:33:57.944565 2516487 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:33:57Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:33:57.944738 2516487 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:33:57.988233 2516487 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 09:33:57.988249 2516487 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 09:33:57.988297 2516487 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 09:33:58.010448 2516487 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:33:58.010905 2516487 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-703627" does not appear in /home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:33:58.011024 2516487 kubeconfig.go:62] /home/jenkins/minikube-integration/21835-2314135/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-703627" cluster setting kubeconfig missing "default-k8s-diff-port-703627" context setting]
	I1101 09:33:58.011308 2516487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/kubeconfig: {Name:mk53329368b7306829f4e47471838b13e1e36d2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
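The kubeconfig repair above adds the missing cluster and context entries for default-k8s-diff-port-703627 to the run's kubeconfig. Rebuilt by hand it would amount to roughly the following (a sketch only; minikube does this itself, the API server endpoint below is a placeholder rather than a value taken from this log, and the user name mirroring the profile is an assumption about minikube's kubeconfig layout):

	export KUBECONFIG=/home/jenkins/minikube-integration/21835-2314135/kubeconfig
	kubectl config set-cluster default-k8s-diff-port-703627 \
	  --server=https://<forwarded-apiserver-endpoint> \
	  --certificate-authority=/home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt
	kubectl config set-context default-k8s-diff-port-703627 \
	  --cluster=default-k8s-diff-port-703627 --user=default-k8s-diff-port-703627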
	I1101 09:33:58.012987 2516487 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 09:33:58.041795 2516487 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1101 09:33:58.041832 2516487 kubeadm.go:602] duration metric: took 53.576656ms to restartPrimaryControlPlane
	I1101 09:33:58.041843 2516487 kubeadm.go:403] duration metric: took 243.781398ms to StartCluster
	I1101 09:33:58.041867 2516487 settings.go:142] acquiring lock: {Name:mka73a3765cb6575d4abe38a6ae3325222684786 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:33:58.041949 2516487 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:33:58.042658 2516487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/kubeconfig: {Name:mk53329368b7306829f4e47471838b13e1e36d2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:33:58.043106 2516487 config.go:182] Loaded profile config "default-k8s-diff-port-703627": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:33:58.043155 2516487 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:33:58.043215 2516487 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:33:58.043557 2516487 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-703627"
	I1101 09:33:58.043576 2516487 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-703627"
	W1101 09:33:58.043583 2516487 addons.go:248] addon storage-provisioner should already be in state true
	I1101 09:33:58.043608 2516487 host.go:66] Checking if "default-k8s-diff-port-703627" exists ...
	I1101 09:33:58.043646 2516487 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-703627"
	I1101 09:33:58.043663 2516487 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-703627"
	W1101 09:33:58.043669 2516487 addons.go:248] addon dashboard should already be in state true
	I1101 09:33:58.043691 2516487 host.go:66] Checking if "default-k8s-diff-port-703627" exists ...
	I1101 09:33:58.044099 2516487 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-703627 --format={{.State.Status}}
	I1101 09:33:58.044328 2516487 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-703627 --format={{.State.Status}}
	I1101 09:33:58.044636 2516487 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-703627"
	I1101 09:33:58.044656 2516487 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-703627"
	I1101 09:33:58.044960 2516487 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-703627 --format={{.State.Status}}
	I1101 09:33:58.050236 2516487 out.go:179] * Verifying Kubernetes components...
	I1101 09:33:58.053824 2516487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:33:58.097917 2516487 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-703627"
	W1101 09:33:58.097939 2516487 addons.go:248] addon default-storageclass should already be in state true
	I1101 09:33:58.097963 2516487 host.go:66] Checking if "default-k8s-diff-port-703627" exists ...
	I1101 09:33:58.098372 2516487 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-703627 --format={{.State.Status}}
	I1101 09:33:58.111148 2516487 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:33:58.114192 2516487 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:33:58.114212 2516487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:33:58.114275 2516487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-703627
	I1101 09:33:58.147928 2516487 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 09:33:58.148105 2516487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36375 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/default-k8s-diff-port-703627/id_rsa Username:docker}
	I1101 09:33:58.153695 2516487 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 09:33:58.157309 2516487 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 09:33:58.157336 2516487 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 09:33:58.157409 2516487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-703627
	I1101 09:33:58.168457 2516487 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:33:58.168485 2516487 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:33:58.168548 2516487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-703627
	I1101 09:33:58.198931 2516487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36375 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/default-k8s-diff-port-703627/id_rsa Username:docker}
	I1101 09:33:58.213341 2516487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36375 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/default-k8s-diff-port-703627/id_rsa Username:docker}
	I1101 09:33:58.462759 2516487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:33:58.485656 2516487 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:33:58.490994 2516487 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 09:33:58.491062 2516487 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 09:33:58.522337 2516487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:33:58.526888 2516487 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 09:33:58.526967 2516487 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 09:33:58.574746 2516487 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 09:33:58.574817 2516487 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 09:33:58.650205 2516487 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 09:33:58.650225 2516487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 09:33:58.693131 2516487 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 09:33:58.693151 2516487 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 09:33:57.124716 2518640 out.go:252] * Restarting existing docker container for "newest-cni-124713" ...
	I1101 09:33:57.124797 2518640 cli_runner.go:164] Run: docker start newest-cni-124713
	I1101 09:33:57.456628 2518640 cli_runner.go:164] Run: docker container inspect newest-cni-124713 --format={{.State.Status}}
	I1101 09:33:57.486798 2518640 kic.go:430] container "newest-cni-124713" state is running.
	I1101 09:33:57.487195 2518640 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-124713
	I1101 09:33:57.518367 2518640 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/config.json ...
	I1101 09:33:57.518580 2518640 machine.go:94] provisionDockerMachine start ...
	I1101 09:33:57.518638 2518640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-124713
	I1101 09:33:57.547272 2518640 main.go:143] libmachine: Using SSH client type: native
	I1101 09:33:57.547589 2518640 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36380 <nil> <nil>}
	I1101 09:33:57.547599 2518640 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:33:57.548592 2518640 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52394->127.0.0.1:36380: read: connection reset by peer
	I1101 09:34:00.732222 2518640 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-124713
	
	I1101 09:34:00.732250 2518640 ubuntu.go:182] provisioning hostname "newest-cni-124713"
	I1101 09:34:00.732340 2518640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-124713
	I1101 09:34:00.765473 2518640 main.go:143] libmachine: Using SSH client type: native
	I1101 09:34:00.765787 2518640 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36380 <nil> <nil>}
	I1101 09:34:00.765805 2518640 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-124713 && echo "newest-cni-124713" | sudo tee /etc/hostname
	I1101 09:34:00.942684 2518640 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-124713
	
	I1101 09:34:00.942803 2518640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-124713
	I1101 09:34:00.977631 2518640 main.go:143] libmachine: Using SSH client type: native
	I1101 09:34:00.977949 2518640 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36380 <nil> <nil>}
	I1101 09:34:00.977974 2518640 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-124713' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-124713/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-124713' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:34:01.156776 2518640 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:34:01.156850 2518640 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-2314135/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-2314135/.minikube}
	I1101 09:34:01.156908 2518640 ubuntu.go:190] setting up certificates
	I1101 09:34:01.156935 2518640 provision.go:84] configureAuth start
	I1101 09:34:01.157022 2518640 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-124713
	I1101 09:34:01.215103 2518640 provision.go:143] copyHostCerts
	I1101 09:34:01.215188 2518640 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem, removing ...
	I1101 09:34:01.215210 2518640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem
	I1101 09:34:01.215300 2518640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem (1082 bytes)
	I1101 09:34:01.215414 2518640 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem, removing ...
	I1101 09:34:01.215427 2518640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem
	I1101 09:34:01.215457 2518640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem (1123 bytes)
	I1101 09:34:01.215528 2518640 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem, removing ...
	I1101 09:34:01.215538 2518640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem
	I1101 09:34:01.215563 2518640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem (1675 bytes)
	I1101 09:34:01.215762 2518640 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem org=jenkins.newest-cni-124713 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-124713]
	I1101 09:33:58.731808 2516487 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 09:33:58.731830 2516487 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 09:33:58.749089 2516487 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 09:33:58.749109 2516487 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 09:33:58.766064 2516487 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 09:33:58.766130 2516487 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 09:33:58.783404 2516487 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 09:33:58.783470 2516487 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 09:33:58.805331 2516487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
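The single kubectl apply above installs all ten dashboard manifests in one call. A follow-up check that the addon actually became ready could look like this (the kubernetes-dashboard namespace and deployment name are the upstream defaults and are assumed here, not read from this log):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl -n kubernetes-dashboard \
	  rollout status deployment/kubernetes-dashboard --timeout=120s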
	I1101 09:34:01.875679 2518640 provision.go:177] copyRemoteCerts
	I1101 09:34:01.875777 2518640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:34:01.875836 2518640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-124713
	I1101 09:34:01.895965 2518640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36380 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/newest-cni-124713/id_rsa Username:docker}
	I1101 09:34:02.023266 2518640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:34:02.061715 2518640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 09:34:02.111426 2518640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:34:02.148191 2518640 provision.go:87] duration metric: took 991.208198ms to configureAuth
	I1101 09:34:02.148231 2518640 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:34:02.148529 2518640 config.go:182] Loaded profile config "newest-cni-124713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:34:02.148798 2518640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-124713
	I1101 09:34:02.181646 2518640 main.go:143] libmachine: Using SSH client type: native
	I1101 09:34:02.181968 2518640 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36380 <nil> <nil>}
	I1101 09:34:02.181989 2518640 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:34:02.632246 2518640 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:34:02.632334 2518640 machine.go:97] duration metric: took 5.113743832s to provisionDockerMachine
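The SSH command above writes CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' to /etc/sysconfig/crio.minikube and restarts CRI-O; the kicbase image is expected to wire that sysconfig file into the crio unit. Two quick checks on the node (a sketch, not taken from this log):

	cat /etc/sysconfig/crio.minikube   # should echo back the CRIO_MINIKUBE_OPTIONS line written above
	systemctl cat crio                 # shows the crio unit plus drop-ins, where the sysconfig file is expected to be referenced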
	I1101 09:34:02.632367 2518640 start.go:293] postStartSetup for "newest-cni-124713" (driver="docker")
	I1101 09:34:02.632391 2518640 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:34:02.632476 2518640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:34:02.632539 2518640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-124713
	I1101 09:34:02.681445 2518640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36380 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/newest-cni-124713/id_rsa Username:docker}
	I1101 09:34:02.822180 2518640 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:34:02.828186 2518640 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:34:02.828220 2518640 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:34:02.828232 2518640 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/addons for local assets ...
	I1101 09:34:02.828290 2518640 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/files for local assets ...
	I1101 09:34:02.828373 2518640 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem -> 23159822.pem in /etc/ssl/certs
	I1101 09:34:02.828483 2518640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:34:02.837920 2518640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem --> /etc/ssl/certs/23159822.pem (1708 bytes)
	I1101 09:34:02.876249 2518640 start.go:296] duration metric: took 243.854216ms for postStartSetup
	I1101 09:34:02.876366 2518640 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:34:02.876423 2518640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-124713
	I1101 09:34:02.909038 2518640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36380 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/newest-cni-124713/id_rsa Username:docker}
	I1101 09:34:03.029684 2518640 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:34:03.036597 2518640 fix.go:56] duration metric: took 5.937523034s for fixHost
	I1101 09:34:03.036624 2518640 start.go:83] releasing machines lock for "newest-cni-124713", held for 5.937573067s
	I1101 09:34:03.036706 2518640 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-124713
	I1101 09:34:03.068116 2518640 ssh_runner.go:195] Run: cat /version.json
	I1101 09:34:03.068156 2518640 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:34:03.068171 2518640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-124713
	I1101 09:34:03.068211 2518640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-124713
	I1101 09:34:03.103993 2518640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36380 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/newest-cni-124713/id_rsa Username:docker}
	I1101 09:34:03.104634 2518640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36380 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/newest-cni-124713/id_rsa Username:docker}
	I1101 09:34:03.228748 2518640 ssh_runner.go:195] Run: systemctl --version
	I1101 09:34:03.347796 2518640 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:34:03.420660 2518640 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:34:03.425599 2518640 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:34:03.425716 2518640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:34:03.433839 2518640 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 09:34:03.433910 2518640 start.go:496] detecting cgroup driver to use...
	I1101 09:34:03.434002 2518640 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 09:34:03.434088 2518640 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:34:03.453479 2518640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:34:03.468921 2518640 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:34:03.469038 2518640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:34:03.485035 2518640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:34:03.508515 2518640 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:34:03.687067 2518640 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:34:03.869848 2518640 docker.go:234] disabling docker service ...
	I1101 09:34:03.869967 2518640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:34:03.888391 2518640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:34:03.902444 2518640 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:34:04.052837 2518640 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:34:04.239418 2518640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
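The sequence above leaves CRI-O as the only runtime answering on the node: containerd is stopped, and both cri-dockerd and the docker engine are stopped, disabled, and masked, sockets first so socket activation cannot bring them back. Condensed into a sketch (the grouping differs slightly from the exact order in the log):

	# stop socket units before their services so socket activation cannot restart them
	for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
	  sudo systemctl stop -f "$unit" || true
	done
	sudo systemctl disable cri-docker.socket docker.socket
	sudo systemctl mask cri-docker.service docker.service
	sudo systemctl is-active --quiet docker || echo "docker is no longer active"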
	I1101 09:34:04.263877 2518640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:34:04.293199 2518640 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:34:04.293337 2518640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:34:04.309250 2518640 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:34:04.309365 2518640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:34:04.322639 2518640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:34:04.338356 2518640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:34:04.354324 2518640 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:34:04.369905 2518640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:34:04.385319 2518640 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:34:04.399502 2518640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:34:04.410461 2518640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:34:04.421674 2518640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:34:04.431987 2518640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:34:04.655813 2518640 ssh_runner.go:195] Run: sudo systemctl restart crio
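The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, and an unprivileged-port sysctl) before the daemon-reload and CRI-O restart. A quick way to check the result on the node; the expected lines are reconstructed from the commands, not captured from the actual file:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected (roughly):
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",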
	I1101 09:34:04.863830 2518640 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:34:04.864034 2518640 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:34:04.869604 2518640 start.go:564] Will wait 60s for crictl version
	I1101 09:34:04.869712 2518640 ssh_runner.go:195] Run: which crictl
	I1101 09:34:04.874338 2518640 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:34:04.918823 2518640 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:34:04.919002 2518640 ssh_runner.go:195] Run: crio --version
	I1101 09:34:04.969336 2518640 ssh_runner.go:195] Run: crio --version
	I1101 09:34:05.010524 2518640 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:34:05.013579 2518640 cli_runner.go:164] Run: docker network inspect newest-cni-124713 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:34:05.033708 2518640 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 09:34:05.037890 2518640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:34:05.051153 2518640 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1101 09:34:05.054056 2518640 kubeadm.go:884] updating cluster {Name:newest-cni-124713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-124713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:34:05.054255 2518640 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:34:05.054377 2518640 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:34:05.123580 2518640 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:34:05.123657 2518640 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:34:05.123748 2518640 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:34:05.169361 2518640 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:34:05.169505 2518640 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:34:05.169528 2518640 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 09:34:05.169659 2518640 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-124713 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-124713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:34:05.169794 2518640 ssh_runner.go:195] Run: crio config
	I1101 09:34:05.274417 2518640 cni.go:84] Creating CNI manager for ""
	I1101 09:34:05.274528 2518640 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:34:05.274587 2518640 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1101 09:34:05.274630 2518640 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-124713 NodeName:newest-cni-124713 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:34:05.274800 2518640 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-124713"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:34:05.274906 2518640 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:34:05.283771 2518640 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:34:05.283912 2518640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:34:05.297652 2518640 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 09:34:05.314255 2518640 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:34:05.343893 2518640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1101 09:34:05.362979 2518640 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:34:05.367371 2518640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:34:05.386323 2518640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:34:05.573804 2518640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:34:05.596900 2518640 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713 for IP: 192.168.76.2
	I1101 09:34:05.596984 2518640 certs.go:195] generating shared ca certs ...
	I1101 09:34:05.597017 2518640 certs.go:227] acquiring lock for ca certs: {Name:mk24842b93d4e231663829c7c8677798ff77a3a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:34:05.597248 2518640 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key
	I1101 09:34:05.597480 2518640 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key
	I1101 09:34:05.597536 2518640 certs.go:257] generating profile certs ...
	I1101 09:34:05.597793 2518640 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/client.key
	I1101 09:34:05.597976 2518640 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/apiserver.key.7e7354fe
	I1101 09:34:05.598169 2518640 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/proxy-client.key
	I1101 09:34:05.598398 2518640 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem (1338 bytes)
	W1101 09:34:05.598458 2518640 certs.go:480] ignoring /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982_empty.pem, impossibly tiny 0 bytes
	I1101 09:34:05.598484 2518640 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 09:34:05.598570 2518640 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:34:05.598626 2518640 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:34:05.598705 2518640 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem (1675 bytes)
	I1101 09:34:05.598793 2518640 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem (1708 bytes)
	I1101 09:34:05.599527 2518640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:34:05.622522 2518640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 09:34:05.653840 2518640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:34:05.681007 2518640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:34:05.717827 2518640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 09:34:05.755207 2518640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:34:05.808277 2518640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:34:05.855461 2518640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:34:05.915155 2518640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:34:05.964400 2518640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem --> /usr/share/ca-certificates/2315982.pem (1338 bytes)
	I1101 09:34:06.037265 2518640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem --> /usr/share/ca-certificates/23159822.pem (1708 bytes)
	I1101 09:34:06.073811 2518640 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:34:06.147161 2518640 ssh_runner.go:195] Run: openssl version
	I1101 09:34:06.156959 2518640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23159822.pem && ln -fs /usr/share/ca-certificates/23159822.pem /etc/ssl/certs/23159822.pem"
	I1101 09:34:06.173228 2518640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23159822.pem
	I1101 09:34:06.177831 2518640 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 08:36 /usr/share/ca-certificates/23159822.pem
	I1101 09:34:06.177942 2518640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23159822.pem
	I1101 09:34:06.244971 2518640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23159822.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:34:06.256965 2518640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:34:06.269243 2518640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:34:06.277134 2518640 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:34:06.277270 2518640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:34:06.339356 2518640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:34:06.348869 2518640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2315982.pem && ln -fs /usr/share/ca-certificates/2315982.pem /etc/ssl/certs/2315982.pem"
	I1101 09:34:06.366402 2518640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2315982.pem
	I1101 09:34:06.373045 2518640 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 08:36 /usr/share/ca-certificates/2315982.pem
	I1101 09:34:06.373170 2518640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2315982.pem
	I1101 09:34:06.418478 2518640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2315982.pem /etc/ssl/certs/51391683.0"
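	The three rounds above copy each CA PEM into /usr/share/ca-certificates and link it into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the two user certs), which is the lookup scheme OpenSSL uses for trusted CAs. A sketch of the same convention for one PEM, assuming it already sits in /usr/share/ca-certificates:

	# Derive the subject hash and create the <hash>.0 symlink OpenSSL expects.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"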
	I1101 09:34:06.426246 2518640 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:34:06.431340 2518640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:34:06.482537 2518640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:34:06.531532 2518640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:34:06.580819 2518640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:34:06.628375 2518640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:34:06.690280 2518640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
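	Each of the `-checkend 86400` runs above exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now; a non-zero exit is what would force the restart path to regenerate control-plane certs. For example (sketch; the certificate path is taken from the log above):

	# Exit status 0: the cert remains valid for at least another 24 hours.
	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	  echo "certificate valid for the next 24h"
	else
	  echo "certificate expires within 24h; regeneration needed"
	fi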
	I1101 09:34:06.778925 2518640 kubeadm.go:401] StartCluster: {Name:newest-cni-124713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-124713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:34:06.779067 2518640 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:34:06.779160 2518640 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:34:06.914995 2518640 cri.go:89] found id: ""
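	The empty `found id: ""` result means CRI-O reports no existing kube-system containers, so the on-disk configuration checked on the next line is what triggers the cluster-restart path. The same listing can be reproduced on the node (hypothetical manual invocation of the command the log runs via ssh_runner):

	# List all kube-system containers (running or not) known to CRI-O;
	# empty output matches the "found id" result above.
	minikube -p newest-cni-124713 ssh -- \
	  sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system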
	I1101 09:34:06.915071 2518640 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:34:06.928076 2518640 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 09:34:06.928097 2518640 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 09:34:06.928149 2518640 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 09:34:06.966169 2518640 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:34:06.966782 2518640 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-124713" does not appear in /home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:34:06.967037 2518640 kubeconfig.go:62] /home/jenkins/minikube-integration/21835-2314135/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-124713" cluster setting kubeconfig missing "newest-cni-124713" context setting]
	I1101 09:34:06.967491 2518640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/kubeconfig: {Name:mk53329368b7306829f4e47471838b13e1e36d2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:34:06.969222 2518640 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 09:34:07.005289 2518640 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1101 09:34:07.005334 2518640 kubeadm.go:602] duration metric: took 77.229752ms to restartPrimaryControlPlane
	I1101 09:34:07.005345 2518640 kubeadm.go:403] duration metric: took 226.431069ms to StartCluster
	I1101 09:34:07.005366 2518640 settings.go:142] acquiring lock: {Name:mka73a3765cb6575d4abe38a6ae3325222684786 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:34:07.005442 2518640 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:34:07.006476 2518640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/kubeconfig: {Name:mk53329368b7306829f4e47471838b13e1e36d2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:34:07.006712 2518640 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:34:07.007102 2518640 config.go:182] Loaded profile config "newest-cni-124713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:34:07.007079 2518640 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:34:07.007228 2518640 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-124713"
	I1101 09:34:07.007246 2518640 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-124713"
	W1101 09:34:07.007253 2518640 addons.go:248] addon storage-provisioner should already be in state true
	I1101 09:34:07.007287 2518640 host.go:66] Checking if "newest-cni-124713" exists ...
	I1101 09:34:07.007294 2518640 addons.go:70] Setting dashboard=true in profile "newest-cni-124713"
	I1101 09:34:07.007308 2518640 addons.go:239] Setting addon dashboard=true in "newest-cni-124713"
	W1101 09:34:07.007314 2518640 addons.go:248] addon dashboard should already be in state true
	I1101 09:34:07.007339 2518640 host.go:66] Checking if "newest-cni-124713" exists ...
	I1101 09:34:07.007743 2518640 cli_runner.go:164] Run: docker container inspect newest-cni-124713 --format={{.State.Status}}
	I1101 09:34:07.008141 2518640 cli_runner.go:164] Run: docker container inspect newest-cni-124713 --format={{.State.Status}}
	I1101 09:34:07.008273 2518640 addons.go:70] Setting default-storageclass=true in profile "newest-cni-124713"
	I1101 09:34:07.008293 2518640 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-124713"
	I1101 09:34:07.008572 2518640 cli_runner.go:164] Run: docker container inspect newest-cni-124713 --format={{.State.Status}}
	I1101 09:34:07.011795 2518640 out.go:179] * Verifying Kubernetes components...
	I1101 09:34:07.022257 2518640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:34:07.054705 2518640 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:34:07.057562 2518640 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:34:07.057582 2518640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:34:07.057654 2518640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-124713
	I1101 09:34:07.073459 2518640 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 09:34:07.076493 2518640 addons.go:239] Setting addon default-storageclass=true in "newest-cni-124713"
	W1101 09:34:07.076514 2518640 addons.go:248] addon default-storageclass should already be in state true
	I1101 09:34:07.076539 2518640 host.go:66] Checking if "newest-cni-124713" exists ...
	I1101 09:34:07.076989 2518640 cli_runner.go:164] Run: docker container inspect newest-cni-124713 --format={{.State.Status}}
	I1101 09:34:07.093780 2518640 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 09:34:07.670674 2516487 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.207822506s)
	I1101 09:34:07.670732 2516487 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.184994775s)
	I1101 09:34:07.670762 2516487 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-703627" to be "Ready" ...
	I1101 09:34:07.671061 2516487 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.148644965s)
	I1101 09:34:07.671338 2516487 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.865873485s)
	I1101 09:34:07.676945 2516487 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-703627 addons enable metrics-server
	
	I1101 09:34:07.712446 2516487 node_ready.go:49] node "default-k8s-diff-port-703627" is "Ready"
	I1101 09:34:07.712483 2516487 node_ready.go:38] duration metric: took 41.704263ms for node "default-k8s-diff-port-703627" to be "Ready" ...
	I1101 09:34:07.712497 2516487 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:34:07.712561 2516487 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:34:07.744227 2516487 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1101 09:34:07.745510 2516487 api_server.go:72] duration metric: took 9.702324006s to wait for apiserver process to appear ...
	I1101 09:34:07.745535 2516487 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:34:07.745555 2516487 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1101 09:34:07.748553 2516487 addons.go:515] duration metric: took 9.705328018s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1101 09:34:07.800374 2516487 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1101 09:34:07.804154 2516487 api_server.go:141] control plane version: v1.34.1
	I1101 09:34:07.804183 2516487 api_server.go:131] duration metric: took 58.641188ms to wait for apiserver health ...
	I1101 09:34:07.804193 2516487 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:34:07.811001 2516487 system_pods.go:59] 9 kube-system pods found
	I1101 09:34:07.811045 2516487 system_pods.go:61] "coredns-66bc5c9577-7hh2n" [27a206c0-1b3c-477f-a1c8-63a1f5c04dac] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:34:07.811055 2516487 system_pods.go:61] "coredns-66bc5c9577-mbmf5" [d919bbe5-a51f-497a-ae3b-e76e42dfb5c4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:34:07.811065 2516487 system_pods.go:61] "etcd-default-k8s-diff-port-703627" [ee4635c2-2a7e-4940-a911-a6776fb4bf06] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:34:07.811071 2516487 system_pods.go:61] "kindnet-td2vz" [b0d693ff-55a9-4906-891d-28f7d9849789] Running
	I1101 09:34:07.811079 2516487 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-703627" [6547f2f4-7d33-4b6b-b603-720e901c4f38] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:34:07.811088 2516487 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-703627" [7d330496-b41b-4395-8c59-fdfcfc6043fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:34:07.811094 2516487 system_pods.go:61] "kube-proxy-6lwj9" [f48fe986-0db5-425e-a988-0396b9bd45a8] Running
	I1101 09:34:07.811105 2516487 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-703627" [baf327b2-0afe-4ed0-bff5-1c4d1d5e4e85] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:34:07.811111 2516487 system_pods.go:61] "storage-provisioner" [102037a1-7d8b-49cc-9a86-be75b4bfdcfe] Running
	I1101 09:34:07.811121 2516487 system_pods.go:74] duration metric: took 6.922442ms to wait for pod list to return data ...
	I1101 09:34:07.811130 2516487 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:34:07.832356 2516487 default_sa.go:45] found service account: "default"
	I1101 09:34:07.832389 2516487 default_sa.go:55] duration metric: took 21.247509ms for default service account to be created ...
	I1101 09:34:07.832399 2516487 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:34:07.840538 2516487 system_pods.go:86] 9 kube-system pods found
	I1101 09:34:07.840597 2516487 system_pods.go:89] "coredns-66bc5c9577-7hh2n" [27a206c0-1b3c-477f-a1c8-63a1f5c04dac] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:34:07.840608 2516487 system_pods.go:89] "coredns-66bc5c9577-mbmf5" [d919bbe5-a51f-497a-ae3b-e76e42dfb5c4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:34:07.840617 2516487 system_pods.go:89] "etcd-default-k8s-diff-port-703627" [ee4635c2-2a7e-4940-a911-a6776fb4bf06] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:34:07.840625 2516487 system_pods.go:89] "kindnet-td2vz" [b0d693ff-55a9-4906-891d-28f7d9849789] Running
	I1101 09:34:07.840632 2516487 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-703627" [6547f2f4-7d33-4b6b-b603-720e901c4f38] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:34:07.840645 2516487 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-703627" [7d330496-b41b-4395-8c59-fdfcfc6043fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:34:07.840650 2516487 system_pods.go:89] "kube-proxy-6lwj9" [f48fe986-0db5-425e-a988-0396b9bd45a8] Running
	I1101 09:34:07.840656 2516487 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-703627" [baf327b2-0afe-4ed0-bff5-1c4d1d5e4e85] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:34:07.840681 2516487 system_pods.go:89] "storage-provisioner" [102037a1-7d8b-49cc-9a86-be75b4bfdcfe] Running
	I1101 09:34:07.840708 2516487 system_pods.go:126] duration metric: took 8.302424ms to wait for k8s-apps to be running ...
	I1101 09:34:07.840717 2516487 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:34:07.840783 2516487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:34:07.872000 2516487 system_svc.go:56] duration metric: took 31.273414ms WaitForService to wait for kubelet
	I1101 09:34:07.872038 2516487 kubeadm.go:587] duration metric: took 9.828853108s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:34:07.872059 2516487 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:34:07.880218 2516487 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 09:34:07.880252 2516487 node_conditions.go:123] node cpu capacity is 2
	I1101 09:34:07.880266 2516487 node_conditions.go:105] duration metric: took 8.200338ms to run NodePressure ...
	I1101 09:34:07.880278 2516487 start.go:242] waiting for startup goroutines ...
	I1101 09:34:07.880306 2516487 start.go:247] waiting for cluster config update ...
	I1101 09:34:07.880318 2516487 start.go:256] writing updated cluster config ...
	I1101 09:34:07.880638 2516487 ssh_runner.go:195] Run: rm -f paused
	I1101 09:34:07.887402 2516487 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:34:07.909152 2516487 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7hh2n" in "kube-system" namespace to be "Ready" or be gone ...
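	At this point the default-k8s-diff-port-703627 run (process 2516487) starts an extra wait of up to 4 minutes for the core kube-system pods to report Ready, beginning with the coredns pod above. A roughly equivalent manual check, assuming the kubeconfig context carries the profile name as minikube sets it up (minikube performs this wait internally):

	# Block until the kube-dns (coredns) pods are Ready, or fail after 4 minutes.
	kubectl --context default-k8s-diff-port-703627 -n kube-system \
	  wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m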
	I1101 09:34:07.097479 2518640 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 09:34:07.097506 2518640 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 09:34:07.097582 2518640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-124713
	I1101 09:34:07.125668 2518640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36380 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/newest-cni-124713/id_rsa Username:docker}
	I1101 09:34:07.145500 2518640 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:34:07.145523 2518640 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:34:07.145583 2518640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-124713
	I1101 09:34:07.160280 2518640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36380 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/newest-cni-124713/id_rsa Username:docker}
	I1101 09:34:07.184091 2518640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36380 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/newest-cni-124713/id_rsa Username:docker}
	I1101 09:34:07.497814 2518640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:34:07.531619 2518640 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:34:07.531697 2518640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:34:07.631445 2518640 api_server.go:72] duration metric: took 624.700243ms to wait for apiserver process to appear ...
	I1101 09:34:07.631472 2518640 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:34:07.631491 2518640 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 09:34:07.644283 2518640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:34:07.650309 2518640 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 09:34:07.650330 2518640 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 09:34:07.709727 2518640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:34:07.748576 2518640 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 09:34:07.748655 2518640 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 09:34:07.851748 2518640 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 09:34:07.851769 2518640 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 09:34:07.975997 2518640 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 09:34:07.976016 2518640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 09:34:08.064223 2518640 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 09:34:08.064299 2518640 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 09:34:08.229157 2518640 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 09:34:08.229229 2518640 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 09:34:08.289097 2518640 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 09:34:08.289166 2518640 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 09:34:08.307157 2518640 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 09:34:08.307227 2518640 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 09:34:08.331528 2518640 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 09:34:08.331598 2518640 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 09:34:08.360744 2518640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
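	The apply above pushes all ten dashboard manifests in one kubectl invocation. One way to confirm the addon afterwards would be to watch its deployment roll out (sketch; assumes the addon's default kubernetes-dashboard namespace and deployment name, which are not shown in this log):

	# Wait for the dashboard deployment created by the manifests above to become available.
	kubectl --context newest-cni-124713 -n kubernetes-dashboard \
	  rollout status deployment/kubernetes-dashboard --timeout=4m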
	W1101 09:34:09.914714 2516487 pod_ready.go:104] pod "coredns-66bc5c9577-7hh2n" is not "Ready", error: <nil>
	W1101 09:34:12.417571 2516487 pod_ready.go:104] pod "coredns-66bc5c9577-7hh2n" is not "Ready", error: <nil>
	I1101 09:34:12.631916 2518640 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 09:34:12.632004 2518640 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 09:34:13.526543 2518640 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 09:34:13.526645 2518640 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 09:34:13.526674 2518640 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 09:34:13.781198 2518640 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 09:34:13.781267 2518640 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 09:34:13.781301 2518640 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 09:34:13.895928 2518640 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 09:34:13.896001 2518640 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 09:34:14.132428 2518640 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 09:34:14.298207 2518640 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:34:14.298287 2518640 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 09:34:14.632142 2518640 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 09:34:14.649814 2518640 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:34:14.649888 2518640 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 09:34:15.132119 2518640 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 09:34:15.205479 2518640 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:34:15.205515 2518640 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 09:34:15.631928 2518640 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 09:34:15.680613 2518640 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1101 09:34:15.710990 2518640 api_server.go:141] control plane version: v1.34.1
	I1101 09:34:15.711019 2518640 api_server.go:131] duration metric: took 8.079540658s to wait for apiserver health ...
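	The health-probe progression above is expected on a restart: the probe hits the apiserver as system:anonymous and gets 403 while the RBAC bootstrap roles that allow unauthenticated /healthz access are still missing (note the failing poststarthook/rbac/bootstrap-roles check), then 500 while individual post-start hooks finish, then 200 once every check passes. The same per-check breakdown can be requested explicitly (sketch, assuming the newest-cni-124713 kubeconfig context is in place):

	# Ask the apiserver for the verbose healthz report, i.e. the [+]/[-] list seen above.
	kubectl --context newest-cni-124713 get --raw '/healthz?verbose'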
	I1101 09:34:15.711036 2518640 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:34:15.727554 2518640 system_pods.go:59] 8 kube-system pods found
	I1101 09:34:15.727592 2518640 system_pods.go:61] "coredns-66bc5c9577-qkv9l" [a2ef7fa8-3194-409f-a0f6-ece0ba2f87fd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 09:34:15.727602 2518640 system_pods.go:61] "etcd-newest-cni-124713" [77c1f287-1fd4-4f3e-98c4-eff8afed33ae] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:34:15.727611 2518640 system_pods.go:61] "kindnet-4szq6" [dfa514f9-f59f-40fc-86c0-0005e842ee44] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 09:34:15.727622 2518640 system_pods.go:61] "kube-apiserver-newest-cni-124713" [8b1990b4-e307-4233-8887-5fb43000794c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:34:15.727642 2518640 system_pods.go:61] "kube-controller-manager-newest-cni-124713" [78bce883-2129-458e-b59e-ff30b3aa124a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:34:15.727649 2518640 system_pods.go:61] "kube-proxy-b69rf" [0f001764-a3b7-4774-86b6-ab740da66ac4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 09:34:15.727655 2518640 system_pods.go:61] "kube-scheduler-newest-cni-124713" [eeea8f33-465e-40e5-a730-9edd13ae1d26] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:34:15.727665 2518640 system_pods.go:61] "storage-provisioner" [bdc61907-9695-405b-8300-5fd746e2180c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 09:34:15.727671 2518640 system_pods.go:74] duration metric: took 16.627527ms to wait for pod list to return data ...
	I1101 09:34:15.727685 2518640 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:34:15.730221 2518640 default_sa.go:45] found service account: "default"
	I1101 09:34:15.730280 2518640 default_sa.go:55] duration metric: took 2.588211ms for default service account to be created ...
	I1101 09:34:15.730309 2518640 kubeadm.go:587] duration metric: took 8.723565763s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 09:34:15.730338 2518640 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:34:15.740009 2518640 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 09:34:15.740044 2518640 node_conditions.go:123] node cpu capacity is 2
	I1101 09:34:15.740069 2518640 node_conditions.go:105] duration metric: took 9.694681ms to run NodePressure ...
	I1101 09:34:15.740083 2518640 start.go:242] waiting for startup goroutines ...
	I1101 09:34:16.473355 2518640 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.829026772s)
	I1101 09:34:16.473365 2518640 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.76361303s)
	I1101 09:34:16.473517 2518640 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.112677313s)
	I1101 09:34:16.478402 2518640 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-124713 addons enable metrics-server
	
	I1101 09:34:16.483791 2518640 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1101 09:34:16.487158 2518640 addons.go:515] duration metric: took 9.480071835s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1101 09:34:16.487246 2518640 start.go:247] waiting for cluster config update ...
	I1101 09:34:16.487275 2518640 start.go:256] writing updated cluster config ...
	I1101 09:34:16.487568 2518640 ssh_runner.go:195] Run: rm -f paused
	I1101 09:34:16.584992 2518640 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 09:34:16.588448 2518640 out.go:179] * Done! kubectl is now configured to use "newest-cni-124713" cluster and "default" namespace by default
	W1101 09:34:14.918030 2516487 pod_ready.go:104] pod "coredns-66bc5c9577-7hh2n" is not "Ready", error: <nil>
	W1101 09:34:17.417493 2516487 pod_ready.go:104] pod "coredns-66bc5c9577-7hh2n" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.192383429Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.199944778Z" level=info msg="Running pod sandbox: kube-system/kindnet-4szq6/POD" id=a27cf331-7cc9-423f-86c0-8966b7e07d5e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.200185272Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.238324505Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=a27cf331-7cc9-423f-86c0-8966b7e07d5e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.243628752Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=f464aab4-ef48-4b36-8504-2b6cb04c7e20 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.260385817Z" level=info msg="Ran pod sandbox 1e7ac5f58085a996fb4b17f7d8af05a704861b59fcfdbcee844ea290fa4d547d with infra container: kube-system/kindnet-4szq6/POD" id=a27cf331-7cc9-423f-86c0-8966b7e07d5e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.263016095Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=62891047-bdb7-4bee-9f77-4f08c4b64728 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.264579572Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=8aed091c-b8a7-4261-8c8d-090b8854a915 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.26766418Z" level=info msg="Creating container: kube-system/kindnet-4szq6/kindnet-cni" id=56a84612-abd0-47a5-b9cc-1b90b518dcf1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.268176748Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.283025312Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.283600008Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.286212769Z" level=info msg="Ran pod sandbox 74f10e9bc1fe7dcd3e8afb5fadaa740fa2a49fe3daa566f5b0e206f4a60cf259 with infra container: kube-system/kube-proxy-b69rf/POD" id=f464aab4-ef48-4b36-8504-2b6cb04c7e20 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.289024408Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=a4bd9bc0-b28d-46be-8709-a317c9f43028 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.293447883Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=b40a9098-9f75-447b-9d4c-a0ca216f68cb name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.296235686Z" level=info msg="Creating container: kube-system/kube-proxy-b69rf/kube-proxy" id=16ff0295-3372-44e3-83c2-2f68a7486787 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.296823543Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.317408539Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.352408225Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.452901486Z" level=info msg="Created container 7aca1deb842a0ab6957d0e536d9d48ac15390ed42079c675c2b327622f9c757e: kube-system/kindnet-4szq6/kindnet-cni" id=56a84612-abd0-47a5-b9cc-1b90b518dcf1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.455202165Z" level=info msg="Starting container: 7aca1deb842a0ab6957d0e536d9d48ac15390ed42079c675c2b327622f9c757e" id=4cdec3dd-a119-4df2-8956-b52e624d5659 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.45982488Z" level=info msg="Created container 1c56f5f7c7738910e9a4ea8cae5cbe061677d614b47ca42da218c937131d6f7b: kube-system/kube-proxy-b69rf/kube-proxy" id=16ff0295-3372-44e3-83c2-2f68a7486787 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.461860696Z" level=info msg="Starting container: 1c56f5f7c7738910e9a4ea8cae5cbe061677d614b47ca42da218c937131d6f7b" id=a348768e-50c3-4c66-a44e-6ec0b5f5ffa6 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.463130396Z" level=info msg="Started container" PID=1057 containerID=7aca1deb842a0ab6957d0e536d9d48ac15390ed42079c675c2b327622f9c757e description=kube-system/kindnet-4szq6/kindnet-cni id=4cdec3dd-a119-4df2-8956-b52e624d5659 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1e7ac5f58085a996fb4b17f7d8af05a704861b59fcfdbcee844ea290fa4d547d
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.467684723Z" level=info msg="Started container" PID=1056 containerID=1c56f5f7c7738910e9a4ea8cae5cbe061677d614b47ca42da218c937131d6f7b description=kube-system/kube-proxy-b69rf/kube-proxy id=a348768e-50c3-4c66-a44e-6ec0b5f5ffa6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=74f10e9bc1fe7dcd3e8afb5fadaa740fa2a49fe3daa566f5b0e206f4a60cf259
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	1c56f5f7c7738       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 seconds ago       Running             kube-proxy                1                   74f10e9bc1fe7       kube-proxy-b69rf                            kube-system
	7aca1deb842a0       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   6 seconds ago       Running             kindnet-cni               1                   1e7ac5f58085a       kindnet-4szq6                               kube-system
	a8cd9348cd8f6       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   14 seconds ago      Running             kube-scheduler            1                   dbc31cc58335e       kube-scheduler-newest-cni-124713            kube-system
	0055fe7d149aa       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   14 seconds ago      Running             kube-apiserver            1                   0ce602ef41e5d       kube-apiserver-newest-cni-124713            kube-system
	eafecd62f9287       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   14 seconds ago      Running             kube-controller-manager   1                   a51ffdec24d3d       kube-controller-manager-newest-cni-124713   kube-system
	87351c6e0eb29       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   14 seconds ago      Running             etcd                      1                   3d95bff35f254       etcd-newest-cni-124713                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-124713
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-124713
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=newest-cni-124713
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_33_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:33:43 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-124713
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:34:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:34:14 +0000   Sat, 01 Nov 2025 09:33:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:34:14 +0000   Sat, 01 Nov 2025 09:33:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:34:14 +0000   Sat, 01 Nov 2025 09:33:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 01 Nov 2025 09:34:14 +0000   Sat, 01 Nov 2025 09:33:38 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-124713
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                640263e5-c6bc-4077-95b9-66d3ed0270b1
	  Boot ID:                    eebecd53-57fd-46e5-aa39-103fca906436
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-124713                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         35s
	  kube-system                 kindnet-4szq6                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-newest-cni-124713             250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-newest-cni-124713    200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-b69rf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-newest-cni-124713             100m (5%)     0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 29s                kube-proxy       
	  Normal   Starting                 5s                 kube-proxy       
	  Warning  CgroupV1                 44s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  43s (x8 over 43s)  kubelet          Node newest-cni-124713 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    43s (x8 over 43s)  kubelet          Node newest-cni-124713 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     43s (x8 over 43s)  kubelet          Node newest-cni-124713 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  35s                kubelet          Node newest-cni-124713 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 35s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    35s                kubelet          Node newest-cni-124713 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     35s                kubelet          Node newest-cni-124713 status is now: NodeHasSufficientPID
	  Normal   Starting                 35s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           31s                node-controller  Node newest-cni-124713 event: Registered Node newest-cni-124713 in Controller
	  Normal   Starting                 16s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 16s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  15s (x8 over 15s)  kubelet          Node newest-cni-124713 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15s (x8 over 15s)  kubelet          Node newest-cni-124713 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15s (x8 over 15s)  kubelet          Node newest-cni-124713 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-124713 event: Registered Node newest-cni-124713 in Controller
	
	
	==> dmesg <==
	[  +7.992192] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:15] overlayfs: idmapped layers are currently not supported
	[ +24.457663] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:16] overlayfs: idmapped layers are currently not supported
	[ +26.408819] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:18] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:22] overlayfs: idmapped layers are currently not supported
	[ +31.970573] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:24] overlayfs: idmapped layers are currently not supported
	[ +34.721891] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:25] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:26] overlayfs: idmapped layers are currently not supported
	[  +0.217637] overlayfs: idmapped layers are currently not supported
	[ +42.063471] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:29] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:30] overlayfs: idmapped layers are currently not supported
	[ +22.794250] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:31] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:33] overlayfs: idmapped layers are currently not supported
	[ +18.806441] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:34] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [87351c6e0eb295b8873ea951caef05b3e21649cfc05ef5547ec27729a4256b5c] <==
	{"level":"warn","ts":"2025-11-01T09:34:11.263538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:11.350454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:11.371528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:11.413301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:11.447275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:11.482997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:11.513914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:11.569505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:11.616421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:11.670563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:11.766480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:11.933816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:11.947385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:12.026469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:12.058850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:12.098655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:12.116125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:12.160032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:12.180019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:12.221135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:12.247258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:12.274681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:12.300423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:12.311806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:12.392376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45938","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:34:21 up 18:16,  0 user,  load average: 8.02, 4.52, 3.42
	Linux newest-cni-124713 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7aca1deb842a0ab6957d0e536d9d48ac15390ed42079c675c2b327622f9c757e] <==
	I1101 09:34:15.584954       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:34:15.664297       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 09:34:15.664460       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:34:15.664506       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:34:15.664546       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:34:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:34:15.775082       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:34:15.775136       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:34:15.775146       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:34:15.776194       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [0055fe7d149aaac0d9114c4fce265f810e5bcbdf0bac632b3be71cba9a166106] <==
	I1101 09:34:14.131048       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:34:14.133617       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 09:34:14.147285       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 09:34:14.154379       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:34:14.161702       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:34:14.162040       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 09:34:14.162051       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 09:34:14.162161       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 09:34:14.180410       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:34:14.189389       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 09:34:14.203509       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 09:34:14.246549       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 09:34:14.268525       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 09:34:14.268816       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 09:34:14.356637       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:34:14.974349       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:34:15.902201       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:34:15.995789       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:34:16.091891       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:34:16.142131       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:34:16.314053       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.146.182"}
	I1101 09:34:16.346125       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.79.13"}
	I1101 09:34:17.462623       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:34:17.622037       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:34:17.659137       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [eafecd62f9287b62383e0e205bf02befe8b40647500ac451c7a98c4b9d33b883] <==
	I1101 09:34:17.262357       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 09:34:17.262663       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 09:34:17.262726       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 09:34:17.262880       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 09:34:17.262948       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 09:34:17.262998       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 09:34:17.265538       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 09:34:17.266043       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 09:34:17.266268       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 09:34:17.269967       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 09:34:17.270028       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 09:34:17.270076       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 09:34:17.270441       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:34:17.271072       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 09:34:17.271205       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:34:17.289221       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 09:34:17.317614       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 09:34:17.319256       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-124713"
	I1101 09:34:17.319376       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 09:34:17.319577       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 09:34:17.321504       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:34:17.365697       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:34:17.378665       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:34:17.378749       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:34:17.378805       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [1c56f5f7c7738910e9a4ea8cae5cbe061677d614b47ca42da218c937131d6f7b] <==
	I1101 09:34:15.975394       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:34:16.266667       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:34:16.368766       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:34:16.368799       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 09:34:16.377079       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:34:16.474920       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:34:16.475964       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:34:16.492127       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:34:16.493313       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:34:16.493374       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:34:16.524890       1 config.go:200] "Starting service config controller"
	I1101 09:34:16.524973       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:34:16.525016       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:34:16.525043       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:34:16.525095       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:34:16.525123       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:34:16.529023       1 config.go:309] "Starting node config controller"
	I1101 09:34:16.529120       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:34:16.529152       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:34:16.625907       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:34:16.625943       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:34:16.626010       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a8cd9348cd8f6c0612fd1783c7194b83214ace4b8f1e42197ad6df9e56662e12] <==
	I1101 09:34:09.824553       1 serving.go:386] Generated self-signed cert in-memory
	I1101 09:34:15.559338       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:34:15.559368       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:34:15.570767       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 09:34:15.570801       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 09:34:15.570831       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:34:15.570838       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:34:15.570860       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:34:15.570866       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:34:15.583955       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:34:15.585468       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:34:15.674263       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:34:15.674351       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 09:34:15.674456       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:34:11 newest-cni-124713 kubelet[723]: E1101 09:34:11.285674     723 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-124713\" not found" node="newest-cni-124713"
	Nov 01 09:34:12 newest-cni-124713 kubelet[723]: E1101 09:34:12.469550     723 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-124713\" not found" node="newest-cni-124713"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: I1101 09:34:14.033223     723 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-124713"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: E1101 09:34:14.300046     723 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-124713\" already exists" pod="kube-system/etcd-newest-cni-124713"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: I1101 09:34:14.300080     723 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-124713"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: I1101 09:34:14.315789     723 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-124713"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: I1101 09:34:14.315964     723 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-124713"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: I1101 09:34:14.316006     723 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: I1101 09:34:14.317007     723 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: E1101 09:34:14.398269     723 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-124713\" already exists" pod="kube-system/kube-apiserver-newest-cni-124713"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: I1101 09:34:14.398304     723 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-124713"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: E1101 09:34:14.478622     723 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-124713\" already exists" pod="kube-system/kube-controller-manager-newest-cni-124713"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: I1101 09:34:14.478667     723 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-124713"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: E1101 09:34:14.506386     723 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-124713\" already exists" pod="kube-system/kube-scheduler-newest-cni-124713"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: I1101 09:34:14.868930     723 apiserver.go:52] "Watching apiserver"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: I1101 09:34:14.929902     723 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: I1101 09:34:14.965236     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/dfa514f9-f59f-40fc-86c0-0005e842ee44-cni-cfg\") pod \"kindnet-4szq6\" (UID: \"dfa514f9-f59f-40fc-86c0-0005e842ee44\") " pod="kube-system/kindnet-4szq6"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: I1101 09:34:14.965287     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dfa514f9-f59f-40fc-86c0-0005e842ee44-xtables-lock\") pod \"kindnet-4szq6\" (UID: \"dfa514f9-f59f-40fc-86c0-0005e842ee44\") " pod="kube-system/kindnet-4szq6"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: I1101 09:34:14.965309     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dfa514f9-f59f-40fc-86c0-0005e842ee44-lib-modules\") pod \"kindnet-4szq6\" (UID: \"dfa514f9-f59f-40fc-86c0-0005e842ee44\") " pod="kube-system/kindnet-4szq6"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: I1101 09:34:14.965336     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f001764-a3b7-4774-86b6-ab740da66ac4-lib-modules\") pod \"kube-proxy-b69rf\" (UID: \"0f001764-a3b7-4774-86b6-ab740da66ac4\") " pod="kube-system/kube-proxy-b69rf"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: I1101 09:34:14.965379     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0f001764-a3b7-4774-86b6-ab740da66ac4-xtables-lock\") pod \"kube-proxy-b69rf\" (UID: \"0f001764-a3b7-4774-86b6-ab740da66ac4\") " pod="kube-system/kube-proxy-b69rf"
	Nov 01 09:34:15 newest-cni-124713 kubelet[723]: I1101 09:34:15.030697     723 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 01 09:34:18 newest-cni-124713 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 09:34:18 newest-cni-124713 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 09:34:18 newest-cni-124713 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-124713 -n newest-cni-124713
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-124713 -n newest-cni-124713: exit status 2 (532.503907ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-124713 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-qkv9l storage-provisioner dashboard-metrics-scraper-6ffb444bf9-jxmg4 kubernetes-dashboard-855c9754f9-mqllm
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-124713 describe pod coredns-66bc5c9577-qkv9l storage-provisioner dashboard-metrics-scraper-6ffb444bf9-jxmg4 kubernetes-dashboard-855c9754f9-mqllm
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-124713 describe pod coredns-66bc5c9577-qkv9l storage-provisioner dashboard-metrics-scraper-6ffb444bf9-jxmg4 kubernetes-dashboard-855c9754f9-mqllm: exit status 1 (102.600814ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-qkv9l" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-jxmg4" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-mqllm" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-124713 describe pod coredns-66bc5c9577-qkv9l storage-provisioner dashboard-metrics-scraper-6ffb444bf9-jxmg4 kubernetes-dashboard-855c9754f9-mqllm: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-124713
helpers_test.go:243: (dbg) docker inspect newest-cni-124713:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d1b820a5201faa8d9964727187addaaa218935f7dd7e8a43484ca4d1526e7728",
	        "Created": "2025-11-01T09:33:17.58430684Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2518846,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:33:57.161451023Z",
	            "FinishedAt": "2025-11-01T09:33:56.121072471Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/d1b820a5201faa8d9964727187addaaa218935f7dd7e8a43484ca4d1526e7728/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d1b820a5201faa8d9964727187addaaa218935f7dd7e8a43484ca4d1526e7728/hostname",
	        "HostsPath": "/var/lib/docker/containers/d1b820a5201faa8d9964727187addaaa218935f7dd7e8a43484ca4d1526e7728/hosts",
	        "LogPath": "/var/lib/docker/containers/d1b820a5201faa8d9964727187addaaa218935f7dd7e8a43484ca4d1526e7728/d1b820a5201faa8d9964727187addaaa218935f7dd7e8a43484ca4d1526e7728-json.log",
	        "Name": "/newest-cni-124713",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-124713:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-124713",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d1b820a5201faa8d9964727187addaaa218935f7dd7e8a43484ca4d1526e7728",
	                "LowerDir": "/var/lib/docker/overlay2/2b21a768a0c8792bc24ff211492b34b7cdaf559a9b39b08bd8baef77073b5397-init/diff:/var/lib/docker/overlay2/e248e2c4c8c52e2b41c7098e27a1e6d3433c7b0d01c47093073da500268c4b77/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2b21a768a0c8792bc24ff211492b34b7cdaf559a9b39b08bd8baef77073b5397/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2b21a768a0c8792bc24ff211492b34b7cdaf559a9b39b08bd8baef77073b5397/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2b21a768a0c8792bc24ff211492b34b7cdaf559a9b39b08bd8baef77073b5397/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-124713",
	                "Source": "/var/lib/docker/volumes/newest-cni-124713/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-124713",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-124713",
	                "name.minikube.sigs.k8s.io": "newest-cni-124713",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b27bcb4cf8bd34929941a4d72b503a2749c1e431d0025f48b8fe4c6bf39edc16",
	            "SandboxKey": "/var/run/docker/netns/b27bcb4cf8bd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36380"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36381"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36384"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36382"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36383"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-124713": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:7f:c5:84:1b:5b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cbec865c42edd0a01496f38885614d185989b6c702231d2c3f85ce55dc4aabc5",
	                    "EndpointID": "4af02415245de5e4e87d20bc0dfbbf9ec0ffc0ef8cdc57122e7d98ba20e93d5c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-124713",
	                        "d1b820a5201f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-124713 -n newest-cni-124713
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-124713 -n newest-cni-124713: exit status 2 (470.90199ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-124713 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-124713 logs -n 25: (1.228609012s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-312549 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ stop    │ -p embed-certs-312549 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ image   │ no-preload-357229 image list --format=json                                                                                                                                                                                                    │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ addons  │ enable dashboard -p embed-certs-312549 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ start   │ -p embed-certs-312549 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:32 UTC │
	│ pause   │ -p no-preload-357229 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ delete  │ -p no-preload-357229                                                                                                                                                                                                                          │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ delete  │ -p no-preload-357229                                                                                                                                                                                                                          │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ delete  │ -p disable-driver-mounts-054033                                                                                                                                                                                                               │ disable-driver-mounts-054033 │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ start   │ -p default-k8s-diff-port-703627 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-703627 │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:33 UTC │
	│ image   │ embed-certs-312549 image list --format=json                                                                                                                                                                                                   │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ pause   │ -p embed-certs-312549 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │                     │
	│ delete  │ -p embed-certs-312549                                                                                                                                                                                                                         │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ delete  │ -p embed-certs-312549                                                                                                                                                                                                                         │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ start   │ -p newest-cni-124713 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-124713            │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-703627 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-703627 │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-703627 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-703627 │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-703627 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-703627 │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ start   │ -p default-k8s-diff-port-703627 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-703627 │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-124713 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-124713            │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │                     │
	│ stop    │ -p newest-cni-124713 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-124713            │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ addons  │ enable dashboard -p newest-cni-124713 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-124713            │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ start   │ -p newest-cni-124713 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-124713            │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:34 UTC │
	│ image   │ newest-cni-124713 image list --format=json                                                                                                                                                                                                    │ newest-cni-124713            │ jenkins │ v1.37.0 │ 01 Nov 25 09:34 UTC │ 01 Nov 25 09:34 UTC │
	│ pause   │ -p newest-cni-124713 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-124713            │ jenkins │ v1.37.0 │ 01 Nov 25 09:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:33:56
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:33:56.762708 2518640 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:33:56.762923 2518640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:33:56.762946 2518640 out.go:374] Setting ErrFile to fd 2...
	I1101 09:33:56.762964 2518640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:33:56.763244 2518640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 09:33:56.763674 2518640 out.go:368] Setting JSON to false
	I1101 09:33:56.764635 2518640 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":65783,"bootTime":1761923854,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 09:33:56.764738 2518640 start.go:143] virtualization:  
	I1101 09:33:56.768770 2518640 out.go:179] * [newest-cni-124713] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 09:33:56.773055 2518640 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:33:56.773124 2518640 notify.go:221] Checking for updates...
	I1101 09:33:56.779413 2518640 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:33:56.782410 2518640 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:33:56.785921 2518640 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2314135/.minikube
	I1101 09:33:56.788701 2518640 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 09:33:56.791647 2518640 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:33:56.794940 2518640 config.go:182] Loaded profile config "newest-cni-124713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:33:56.795452 2518640 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:33:56.844367 2518640 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:33:56.844476 2518640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:33:56.938409 2518640 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 09:33:56.925126567 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:33:56.938511 2518640 docker.go:319] overlay module found
	I1101 09:33:56.942132 2518640 out.go:179] * Using the docker driver based on existing profile
	I1101 09:33:56.945094 2518640 start.go:309] selected driver: docker
	I1101 09:33:56.945111 2518640 start.go:930] validating driver "docker" against &{Name:newest-cni-124713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-124713 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:33:56.945208 2518640 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:33:56.945905 2518640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:33:57.059544 2518640 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 09:33:57.047460642 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:33:57.059910 2518640 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 09:33:57.059936 2518640 cni.go:84] Creating CNI manager for ""
	I1101 09:33:57.059987 2518640 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:33:57.060026 2518640 start.go:353] cluster config:
	{Name:newest-cni-124713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-124713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:33:57.063351 2518640 out.go:179] * Starting "newest-cni-124713" primary control-plane node in "newest-cni-124713" cluster
	I1101 09:33:57.066137 2518640 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:33:57.069052 2518640 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:33:57.071818 2518640 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:33:57.071896 2518640 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 09:33:57.071905 2518640 cache.go:59] Caching tarball of preloaded images
	I1101 09:33:57.071995 2518640 preload.go:233] Found /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 09:33:57.072004 2518640 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:33:57.072122 2518640 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/config.json ...
	I1101 09:33:57.072316 2518640 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:33:57.098932 2518640 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:33:57.098950 2518640 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:33:57.098962 2518640 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:33:57.098992 2518640 start.go:360] acquireMachinesLock for newest-cni-124713: {Name:mkc03165af37613c9c0e7f1c90ff2df91e2b25ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:33:57.099043 2518640 start.go:364] duration metric: took 33.788µs to acquireMachinesLock for "newest-cni-124713"
	I1101 09:33:57.099062 2518640 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:33:57.099067 2518640 fix.go:54] fixHost starting: 
	I1101 09:33:57.099328 2518640 cli_runner.go:164] Run: docker container inspect newest-cni-124713 --format={{.State.Status}}
	I1101 09:33:57.121509 2518640 fix.go:112] recreateIfNeeded on newest-cni-124713: state=Stopped err=<nil>
	W1101 09:33:57.121536 2518640 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:33:56.053616 2516487 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-703627 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:33:56.070397 2516487 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 09:33:56.074812 2516487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:33:56.084957 2516487 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-703627 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-703627 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:33:56.085098 2516487 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:33:56.085166 2516487 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:33:56.133321 2516487 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:33:56.133341 2516487 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:33:56.133394 2516487 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:33:56.164010 2516487 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:33:56.164032 2516487 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:33:56.164040 2516487 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1101 09:33:56.164147 2516487 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-703627 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-703627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:33:56.164231 2516487 ssh_runner.go:195] Run: crio config
	I1101 09:33:56.239966 2516487 cni.go:84] Creating CNI manager for ""
	I1101 09:33:56.239986 2516487 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:33:56.240006 2516487 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:33:56.240029 2516487 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-703627 NodeName:default-k8s-diff-port-703627 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:33:56.240149 2516487 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-703627"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:33:56.240210 2516487 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:33:56.248653 2516487 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:33:56.248728 2516487 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:33:56.276144 2516487 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1101 09:33:56.321208 2516487 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:33:56.334333 2516487 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1101 09:33:56.358257 2516487 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:33:56.362511 2516487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:33:56.373246 2516487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:33:56.515098 2516487 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:33:56.534683 2516487 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627 for IP: 192.168.85.2
	I1101 09:33:56.534704 2516487 certs.go:195] generating shared ca certs ...
	I1101 09:33:56.534719 2516487 certs.go:227] acquiring lock for ca certs: {Name:mk24842b93d4e231663829c7c8677798ff77a3a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:33:56.534849 2516487 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key
	I1101 09:33:56.534898 2516487 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key
	I1101 09:33:56.534910 2516487 certs.go:257] generating profile certs ...
	I1101 09:33:56.535006 2516487 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/client.key
	I1101 09:33:56.535073 2516487 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/apiserver.key.3f1ecf36
	I1101 09:33:56.535119 2516487 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/proxy-client.key
	I1101 09:33:56.535227 2516487 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem (1338 bytes)
	W1101 09:33:56.535258 2516487 certs.go:480] ignoring /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982_empty.pem, impossibly tiny 0 bytes
	I1101 09:33:56.535270 2516487 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 09:33:56.535298 2516487 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:33:56.535322 2516487 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:33:56.535347 2516487 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem (1675 bytes)
	I1101 09:33:56.535393 2516487 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem (1708 bytes)
	I1101 09:33:56.536241 2516487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:33:56.561130 2516487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 09:33:56.580229 2516487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:33:56.599397 2516487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:33:56.638752 2516487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1101 09:33:56.667949 2516487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:33:56.741049 2516487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:33:56.785095 2516487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 09:33:56.828169 2516487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:33:56.858150 2516487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem --> /usr/share/ca-certificates/2315982.pem (1338 bytes)
	I1101 09:33:56.880845 2516487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem --> /usr/share/ca-certificates/23159822.pem (1708 bytes)
	I1101 09:33:56.908427 2516487 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:33:56.930075 2516487 ssh_runner.go:195] Run: openssl version
	I1101 09:33:56.941103 2516487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:33:56.950724 2516487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:33:56.956339 2516487 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:33:56.956411 2516487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:33:56.998844 2516487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:33:57.008936 2516487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2315982.pem && ln -fs /usr/share/ca-certificates/2315982.pem /etc/ssl/certs/2315982.pem"
	I1101 09:33:57.018797 2516487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2315982.pem
	I1101 09:33:57.023403 2516487 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 08:36 /usr/share/ca-certificates/2315982.pem
	I1101 09:33:57.023472 2516487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2315982.pem
	I1101 09:33:57.073723 2516487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2315982.pem /etc/ssl/certs/51391683.0"
	I1101 09:33:57.082417 2516487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23159822.pem && ln -fs /usr/share/ca-certificates/23159822.pem /etc/ssl/certs/23159822.pem"
	I1101 09:33:57.091206 2516487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23159822.pem
	I1101 09:33:57.095540 2516487 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 08:36 /usr/share/ca-certificates/23159822.pem
	I1101 09:33:57.095599 2516487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23159822.pem
	I1101 09:33:57.139155 2516487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23159822.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:33:57.154103 2516487 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:33:57.160576 2516487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:33:57.207620 2516487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:33:57.274606 2516487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:33:57.357349 2516487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:33:57.462523 2516487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:33:57.631264 2516487 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 09:33:57.798074 2516487 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-703627 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-703627 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:33:57.798164 2516487 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:33:57.798242 2516487 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:33:57.905206 2516487 cri.go:89] found id: "ee79a7fc9cfee9bef0f776db44e3429ff28411131f6bdc1c4562483440dc3f4c"
	I1101 09:33:57.905239 2516487 cri.go:89] found id: "da7e2f29a75554b0877ff12539ff3a7b3a2f4e382fdeae7e7c099e23f545bfe9"
	I1101 09:33:57.905244 2516487 cri.go:89] found id: "ae10c649f560f9607936e15ba64a4779c42997b6bfc46ec03edd143e585f8bb2"
	I1101 09:33:57.905247 2516487 cri.go:89] found id: "c7d1cc29b1ea5c8867b99a096fc1bb9f05c294172a955361ff24adccbc307e8b"
	I1101 09:33:57.905250 2516487 cri.go:89] found id: ""
	I1101 09:33:57.905301 2516487 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 09:33:57.944565 2516487 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:33:57Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:33:57.944738 2516487 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:33:57.988233 2516487 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 09:33:57.988249 2516487 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 09:33:57.988297 2516487 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 09:33:58.010448 2516487 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:33:58.010905 2516487 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-703627" does not appear in /home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:33:58.011024 2516487 kubeconfig.go:62] /home/jenkins/minikube-integration/21835-2314135/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-703627" cluster setting kubeconfig missing "default-k8s-diff-port-703627" context setting]
	I1101 09:33:58.011308 2516487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/kubeconfig: {Name:mk53329368b7306829f4e47471838b13e1e36d2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:33:58.012987 2516487 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 09:33:58.041795 2516487 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1101 09:33:58.041832 2516487 kubeadm.go:602] duration metric: took 53.576656ms to restartPrimaryControlPlane
	I1101 09:33:58.041843 2516487 kubeadm.go:403] duration metric: took 243.781398ms to StartCluster
	I1101 09:33:58.041867 2516487 settings.go:142] acquiring lock: {Name:mka73a3765cb6575d4abe38a6ae3325222684786 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:33:58.041949 2516487 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:33:58.042658 2516487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/kubeconfig: {Name:mk53329368b7306829f4e47471838b13e1e36d2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:33:58.043106 2516487 config.go:182] Loaded profile config "default-k8s-diff-port-703627": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:33:58.043155 2516487 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:33:58.043215 2516487 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:33:58.043557 2516487 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-703627"
	I1101 09:33:58.043576 2516487 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-703627"
	W1101 09:33:58.043583 2516487 addons.go:248] addon storage-provisioner should already be in state true
	I1101 09:33:58.043608 2516487 host.go:66] Checking if "default-k8s-diff-port-703627" exists ...
	I1101 09:33:58.043646 2516487 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-703627"
	I1101 09:33:58.043663 2516487 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-703627"
	W1101 09:33:58.043669 2516487 addons.go:248] addon dashboard should already be in state true
	I1101 09:33:58.043691 2516487 host.go:66] Checking if "default-k8s-diff-port-703627" exists ...
	I1101 09:33:58.044099 2516487 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-703627 --format={{.State.Status}}
	I1101 09:33:58.044328 2516487 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-703627 --format={{.State.Status}}
	I1101 09:33:58.044636 2516487 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-703627"
	I1101 09:33:58.044656 2516487 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-703627"
	I1101 09:33:58.044960 2516487 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-703627 --format={{.State.Status}}
	I1101 09:33:58.050236 2516487 out.go:179] * Verifying Kubernetes components...
	I1101 09:33:58.053824 2516487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:33:58.097917 2516487 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-703627"
	W1101 09:33:58.097939 2516487 addons.go:248] addon default-storageclass should already be in state true
	I1101 09:33:58.097963 2516487 host.go:66] Checking if "default-k8s-diff-port-703627" exists ...
	I1101 09:33:58.098372 2516487 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-703627 --format={{.State.Status}}
	I1101 09:33:58.111148 2516487 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:33:58.114192 2516487 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:33:58.114212 2516487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:33:58.114275 2516487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-703627
	I1101 09:33:58.147928 2516487 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 09:33:58.148105 2516487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36375 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/default-k8s-diff-port-703627/id_rsa Username:docker}
	I1101 09:33:58.153695 2516487 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 09:33:58.157309 2516487 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 09:33:58.157336 2516487 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 09:33:58.157409 2516487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-703627
	I1101 09:33:58.168457 2516487 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:33:58.168485 2516487 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:33:58.168548 2516487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-703627
	I1101 09:33:58.198931 2516487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36375 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/default-k8s-diff-port-703627/id_rsa Username:docker}
	I1101 09:33:58.213341 2516487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36375 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/default-k8s-diff-port-703627/id_rsa Username:docker}
	I1101 09:33:58.462759 2516487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:33:58.485656 2516487 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:33:58.490994 2516487 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 09:33:58.491062 2516487 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 09:33:58.522337 2516487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:33:58.526888 2516487 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 09:33:58.526967 2516487 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 09:33:58.574746 2516487 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 09:33:58.574817 2516487 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 09:33:58.650205 2516487 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 09:33:58.650225 2516487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 09:33:58.693131 2516487 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 09:33:58.693151 2516487 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 09:33:57.124716 2518640 out.go:252] * Restarting existing docker container for "newest-cni-124713" ...
	I1101 09:33:57.124797 2518640 cli_runner.go:164] Run: docker start newest-cni-124713
	I1101 09:33:57.456628 2518640 cli_runner.go:164] Run: docker container inspect newest-cni-124713 --format={{.State.Status}}
	I1101 09:33:57.486798 2518640 kic.go:430] container "newest-cni-124713" state is running.
	I1101 09:33:57.487195 2518640 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-124713
	I1101 09:33:57.518367 2518640 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/config.json ...
	I1101 09:33:57.518580 2518640 machine.go:94] provisionDockerMachine start ...
	I1101 09:33:57.518638 2518640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-124713
	I1101 09:33:57.547272 2518640 main.go:143] libmachine: Using SSH client type: native
	I1101 09:33:57.547589 2518640 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36380 <nil> <nil>}
	I1101 09:33:57.547599 2518640 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:33:57.548592 2518640 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52394->127.0.0.1:36380: read: connection reset by peer
	I1101 09:34:00.732222 2518640 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-124713
	
	I1101 09:34:00.732250 2518640 ubuntu.go:182] provisioning hostname "newest-cni-124713"
	I1101 09:34:00.732340 2518640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-124713
	I1101 09:34:00.765473 2518640 main.go:143] libmachine: Using SSH client type: native
	I1101 09:34:00.765787 2518640 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36380 <nil> <nil>}
	I1101 09:34:00.765805 2518640 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-124713 && echo "newest-cni-124713" | sudo tee /etc/hostname
	I1101 09:34:00.942684 2518640 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-124713
	
	I1101 09:34:00.942803 2518640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-124713
	I1101 09:34:00.977631 2518640 main.go:143] libmachine: Using SSH client type: native
	I1101 09:34:00.977949 2518640 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36380 <nil> <nil>}
	I1101 09:34:00.977974 2518640 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-124713' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-124713/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-124713' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:34:01.156776 2518640 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:34:01.156850 2518640 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-2314135/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-2314135/.minikube}
	I1101 09:34:01.156908 2518640 ubuntu.go:190] setting up certificates
	I1101 09:34:01.156935 2518640 provision.go:84] configureAuth start
	I1101 09:34:01.157022 2518640 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-124713
	I1101 09:34:01.215103 2518640 provision.go:143] copyHostCerts
	I1101 09:34:01.215188 2518640 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem, removing ...
	I1101 09:34:01.215210 2518640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem
	I1101 09:34:01.215300 2518640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem (1082 bytes)
	I1101 09:34:01.215414 2518640 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem, removing ...
	I1101 09:34:01.215427 2518640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem
	I1101 09:34:01.215457 2518640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem (1123 bytes)
	I1101 09:34:01.215528 2518640 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem, removing ...
	I1101 09:34:01.215538 2518640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem
	I1101 09:34:01.215563 2518640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem (1675 bytes)
	I1101 09:34:01.215762 2518640 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem org=jenkins.newest-cni-124713 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-124713]
	I1101 09:33:58.731808 2516487 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 09:33:58.731830 2516487 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 09:33:58.749089 2516487 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 09:33:58.749109 2516487 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 09:33:58.766064 2516487 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 09:33:58.766130 2516487 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 09:33:58.783404 2516487 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 09:33:58.783470 2516487 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 09:33:58.805331 2516487 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 09:34:01.875679 2518640 provision.go:177] copyRemoteCerts
	I1101 09:34:01.875777 2518640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:34:01.875836 2518640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-124713
	I1101 09:34:01.895965 2518640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36380 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/newest-cni-124713/id_rsa Username:docker}
	I1101 09:34:02.023266 2518640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:34:02.061715 2518640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 09:34:02.111426 2518640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:34:02.148191 2518640 provision.go:87] duration metric: took 991.208198ms to configureAuth
	I1101 09:34:02.148231 2518640 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:34:02.148529 2518640 config.go:182] Loaded profile config "newest-cni-124713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:34:02.148798 2518640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-124713
	I1101 09:34:02.181646 2518640 main.go:143] libmachine: Using SSH client type: native
	I1101 09:34:02.181968 2518640 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36380 <nil> <nil>}
	I1101 09:34:02.181989 2518640 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:34:02.632246 2518640 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:34:02.632334 2518640 machine.go:97] duration metric: took 5.113743832s to provisionDockerMachine
	I1101 09:34:02.632367 2518640 start.go:293] postStartSetup for "newest-cni-124713" (driver="docker")
	I1101 09:34:02.632391 2518640 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:34:02.632476 2518640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:34:02.632539 2518640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-124713
	I1101 09:34:02.681445 2518640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36380 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/newest-cni-124713/id_rsa Username:docker}
	I1101 09:34:02.822180 2518640 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:34:02.828186 2518640 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:34:02.828220 2518640 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:34:02.828232 2518640 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/addons for local assets ...
	I1101 09:34:02.828290 2518640 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/files for local assets ...
	I1101 09:34:02.828373 2518640 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem -> 23159822.pem in /etc/ssl/certs
	I1101 09:34:02.828483 2518640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:34:02.837920 2518640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem --> /etc/ssl/certs/23159822.pem (1708 bytes)
	I1101 09:34:02.876249 2518640 start.go:296] duration metric: took 243.854216ms for postStartSetup
	I1101 09:34:02.876366 2518640 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:34:02.876423 2518640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-124713
	I1101 09:34:02.909038 2518640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36380 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/newest-cni-124713/id_rsa Username:docker}
	I1101 09:34:03.029684 2518640 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:34:03.036597 2518640 fix.go:56] duration metric: took 5.937523034s for fixHost
	I1101 09:34:03.036624 2518640 start.go:83] releasing machines lock for "newest-cni-124713", held for 5.937573067s
	I1101 09:34:03.036706 2518640 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-124713
	I1101 09:34:03.068116 2518640 ssh_runner.go:195] Run: cat /version.json
	I1101 09:34:03.068156 2518640 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:34:03.068171 2518640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-124713
	I1101 09:34:03.068211 2518640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-124713
	I1101 09:34:03.103993 2518640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36380 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/newest-cni-124713/id_rsa Username:docker}
	I1101 09:34:03.104634 2518640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36380 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/newest-cni-124713/id_rsa Username:docker}
	I1101 09:34:03.228748 2518640 ssh_runner.go:195] Run: systemctl --version
	I1101 09:34:03.347796 2518640 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:34:03.420660 2518640 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:34:03.425599 2518640 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:34:03.425716 2518640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:34:03.433839 2518640 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 09:34:03.433910 2518640 start.go:496] detecting cgroup driver to use...
	I1101 09:34:03.434002 2518640 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 09:34:03.434088 2518640 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:34:03.453479 2518640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:34:03.468921 2518640 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:34:03.469038 2518640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:34:03.485035 2518640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:34:03.508515 2518640 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:34:03.687067 2518640 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:34:03.869848 2518640 docker.go:234] disabling docker service ...
	I1101 09:34:03.869967 2518640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:34:03.888391 2518640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:34:03.902444 2518640 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:34:04.052837 2518640 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:34:04.239418 2518640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:34:04.263877 2518640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:34:04.293199 2518640 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:34:04.293337 2518640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:34:04.309250 2518640 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:34:04.309365 2518640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:34:04.322639 2518640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:34:04.338356 2518640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:34:04.354324 2518640 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:34:04.369905 2518640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:34:04.385319 2518640 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:34:04.399502 2518640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:34:04.410461 2518640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:34:04.421674 2518640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:34:04.431987 2518640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:34:04.655813 2518640 ssh_runner.go:195] Run: sudo systemctl restart crio
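The block above pins CRI-O's pause image and cgroup manager by rewriting the drop-in at /etc/crio/crio.conf.d/02-crio.conf before reloading systemd and restarting the service. Minikube does this by shelling out to sed over SSH, as the log shows; purely as an illustration of the same line-replacement edits, a hedged Go sketch (path and values copied from the log, nothing else authoritative) might look like:

```go
// Sketch only: regexp-based in-place edits equivalent to the sed commands above.
package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log

	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}

	// Replace the whole pause_image line, like
	// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))

	// Same idea for the cgroup manager line.
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out = cgroup.ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))

	if err := os.WriteFile(conf, out, 0o644); err != nil {
		panic(err)
	}
}
```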
	I1101 09:34:04.863830 2518640 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:34:04.864034 2518640 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:34:04.869604 2518640 start.go:564] Will wait 60s for crictl version
	I1101 09:34:04.869712 2518640 ssh_runner.go:195] Run: which crictl
	I1101 09:34:04.874338 2518640 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:34:04.918823 2518640 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:34:04.919002 2518640 ssh_runner.go:195] Run: crio --version
	I1101 09:34:04.969336 2518640 ssh_runner.go:195] Run: crio --version
	I1101 09:34:05.010524 2518640 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:34:05.013579 2518640 cli_runner.go:164] Run: docker network inspect newest-cni-124713 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:34:05.033708 2518640 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 09:34:05.037890 2518640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:34:05.051153 2518640 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1101 09:34:05.054056 2518640 kubeadm.go:884] updating cluster {Name:newest-cni-124713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-124713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:34:05.054255 2518640 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:34:05.054377 2518640 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:34:05.123580 2518640 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:34:05.123657 2518640 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:34:05.123748 2518640 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:34:05.169361 2518640 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:34:05.169505 2518640 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:34:05.169528 2518640 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 09:34:05.169659 2518640 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-124713 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-124713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:34:05.169794 2518640 ssh_runner.go:195] Run: crio config
	I1101 09:34:05.274417 2518640 cni.go:84] Creating CNI manager for ""
	I1101 09:34:05.274528 2518640 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:34:05.274587 2518640 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1101 09:34:05.274630 2518640 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-124713 NodeName:newest-cni-124713 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:34:05.274800 2518640 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-124713"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:34:05.274906 2518640 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:34:05.283771 2518640 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:34:05.283912 2518640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:34:05.297652 2518640 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 09:34:05.314255 2518640 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:34:05.343893 2518640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1101 09:34:05.362979 2518640 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:34:05.367371 2518640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:34:05.386323 2518640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:34:05.573804 2518640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:34:05.596900 2518640 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713 for IP: 192.168.76.2
	I1101 09:34:05.596984 2518640 certs.go:195] generating shared ca certs ...
	I1101 09:34:05.597017 2518640 certs.go:227] acquiring lock for ca certs: {Name:mk24842b93d4e231663829c7c8677798ff77a3a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:34:05.597248 2518640 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key
	I1101 09:34:05.597480 2518640 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key
	I1101 09:34:05.597536 2518640 certs.go:257] generating profile certs ...
	I1101 09:34:05.597793 2518640 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/client.key
	I1101 09:34:05.597976 2518640 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/apiserver.key.7e7354fe
	I1101 09:34:05.598169 2518640 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/proxy-client.key
	I1101 09:34:05.598398 2518640 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem (1338 bytes)
	W1101 09:34:05.598458 2518640 certs.go:480] ignoring /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982_empty.pem, impossibly tiny 0 bytes
	I1101 09:34:05.598484 2518640 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 09:34:05.598570 2518640 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:34:05.598626 2518640 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:34:05.598705 2518640 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem (1675 bytes)
	I1101 09:34:05.598793 2518640 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem (1708 bytes)
	I1101 09:34:05.599527 2518640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:34:05.622522 2518640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 09:34:05.653840 2518640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:34:05.681007 2518640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:34:05.717827 2518640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 09:34:05.755207 2518640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:34:05.808277 2518640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:34:05.855461 2518640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/newest-cni-124713/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:34:05.915155 2518640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:34:05.964400 2518640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem --> /usr/share/ca-certificates/2315982.pem (1338 bytes)
	I1101 09:34:06.037265 2518640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem --> /usr/share/ca-certificates/23159822.pem (1708 bytes)
	I1101 09:34:06.073811 2518640 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:34:06.147161 2518640 ssh_runner.go:195] Run: openssl version
	I1101 09:34:06.156959 2518640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23159822.pem && ln -fs /usr/share/ca-certificates/23159822.pem /etc/ssl/certs/23159822.pem"
	I1101 09:34:06.173228 2518640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23159822.pem
	I1101 09:34:06.177831 2518640 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 08:36 /usr/share/ca-certificates/23159822.pem
	I1101 09:34:06.177942 2518640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23159822.pem
	I1101 09:34:06.244971 2518640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23159822.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:34:06.256965 2518640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:34:06.269243 2518640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:34:06.277134 2518640 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:34:06.277270 2518640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:34:06.339356 2518640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:34:06.348869 2518640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2315982.pem && ln -fs /usr/share/ca-certificates/2315982.pem /etc/ssl/certs/2315982.pem"
	I1101 09:34:06.366402 2518640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2315982.pem
	I1101 09:34:06.373045 2518640 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 08:36 /usr/share/ca-certificates/2315982.pem
	I1101 09:34:06.373170 2518640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2315982.pem
	I1101 09:34:06.418478 2518640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2315982.pem /etc/ssl/certs/51391683.0"
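The hash names in the symlink commands above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes: OpenSSL looks up CA files in /etc/ssl/certs by that hash, so each certificate gets a "<hash>.0" link pointing back at it. A minimal sketch of the same dance, assuming openssl is on PATH (the paths below are copied from the log; this is not minikube's code):

```go
// Sketch: compute a cert's OpenSSL subject hash and recreate its /etc/ssl/certs symlink.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941", as in the log

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // make the operation idempotent, like `ln -fs`
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}
```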
	I1101 09:34:06.426246 2518640 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:34:06.431340 2518640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:34:06.482537 2518640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:34:06.531532 2518640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:34:06.580819 2518640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:34:06.628375 2518640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:34:06.690280 2518640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
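Each `openssl x509 ... -checkend 86400` run above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would trigger certificate regeneration. The equivalent check in Go, as a standalone sketch (the path below is one of the certs from the log, used purely as an example):

```go
// Sketch: report whether a PEM certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
```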
	I1101 09:34:06.778925 2518640 kubeadm.go:401] StartCluster: {Name:newest-cni-124713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-124713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:34:06.779067 2518640 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:34:06.779160 2518640 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:34:06.914995 2518640 cri.go:89] found id: ""
	I1101 09:34:06.915071 2518640 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:34:06.928076 2518640 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 09:34:06.928097 2518640 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 09:34:06.928149 2518640 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 09:34:06.966169 2518640 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:34:06.966782 2518640 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-124713" does not appear in /home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:34:06.967037 2518640 kubeconfig.go:62] /home/jenkins/minikube-integration/21835-2314135/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-124713" cluster setting kubeconfig missing "newest-cni-124713" context setting]
	I1101 09:34:06.967491 2518640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/kubeconfig: {Name:mk53329368b7306829f4e47471838b13e1e36d2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:34:06.969222 2518640 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 09:34:07.005289 2518640 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1101 09:34:07.005334 2518640 kubeadm.go:602] duration metric: took 77.229752ms to restartPrimaryControlPlane
	I1101 09:34:07.005345 2518640 kubeadm.go:403] duration metric: took 226.431069ms to StartCluster
	I1101 09:34:07.005366 2518640 settings.go:142] acquiring lock: {Name:mka73a3765cb6575d4abe38a6ae3325222684786 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:34:07.005442 2518640 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:34:07.006476 2518640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/kubeconfig: {Name:mk53329368b7306829f4e47471838b13e1e36d2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:34:07.006712 2518640 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:34:07.007102 2518640 config.go:182] Loaded profile config "newest-cni-124713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:34:07.007079 2518640 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:34:07.007228 2518640 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-124713"
	I1101 09:34:07.007246 2518640 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-124713"
	W1101 09:34:07.007253 2518640 addons.go:248] addon storage-provisioner should already be in state true
	I1101 09:34:07.007287 2518640 host.go:66] Checking if "newest-cni-124713" exists ...
	I1101 09:34:07.007294 2518640 addons.go:70] Setting dashboard=true in profile "newest-cni-124713"
	I1101 09:34:07.007308 2518640 addons.go:239] Setting addon dashboard=true in "newest-cni-124713"
	W1101 09:34:07.007314 2518640 addons.go:248] addon dashboard should already be in state true
	I1101 09:34:07.007339 2518640 host.go:66] Checking if "newest-cni-124713" exists ...
	I1101 09:34:07.007743 2518640 cli_runner.go:164] Run: docker container inspect newest-cni-124713 --format={{.State.Status}}
	I1101 09:34:07.008141 2518640 cli_runner.go:164] Run: docker container inspect newest-cni-124713 --format={{.State.Status}}
	I1101 09:34:07.008273 2518640 addons.go:70] Setting default-storageclass=true in profile "newest-cni-124713"
	I1101 09:34:07.008293 2518640 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-124713"
	I1101 09:34:07.008572 2518640 cli_runner.go:164] Run: docker container inspect newest-cni-124713 --format={{.State.Status}}
	I1101 09:34:07.011795 2518640 out.go:179] * Verifying Kubernetes components...
	I1101 09:34:07.022257 2518640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:34:07.054705 2518640 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:34:07.057562 2518640 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:34:07.057582 2518640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:34:07.057654 2518640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-124713
	I1101 09:34:07.073459 2518640 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 09:34:07.076493 2518640 addons.go:239] Setting addon default-storageclass=true in "newest-cni-124713"
	W1101 09:34:07.076514 2518640 addons.go:248] addon default-storageclass should already be in state true
	I1101 09:34:07.076539 2518640 host.go:66] Checking if "newest-cni-124713" exists ...
	I1101 09:34:07.076989 2518640 cli_runner.go:164] Run: docker container inspect newest-cni-124713 --format={{.State.Status}}
	I1101 09:34:07.093780 2518640 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 09:34:07.670674 2516487 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.207822506s)
	I1101 09:34:07.670732 2516487 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.184994775s)
	I1101 09:34:07.670762 2516487 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-703627" to be "Ready" ...
	I1101 09:34:07.671061 2516487 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.148644965s)
	I1101 09:34:07.671338 2516487 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.865873485s)
	I1101 09:34:07.676945 2516487 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-703627 addons enable metrics-server
	
	I1101 09:34:07.712446 2516487 node_ready.go:49] node "default-k8s-diff-port-703627" is "Ready"
	I1101 09:34:07.712483 2516487 node_ready.go:38] duration metric: took 41.704263ms for node "default-k8s-diff-port-703627" to be "Ready" ...
	I1101 09:34:07.712497 2516487 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:34:07.712561 2516487 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:34:07.744227 2516487 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1101 09:34:07.745510 2516487 api_server.go:72] duration metric: took 9.702324006s to wait for apiserver process to appear ...
	I1101 09:34:07.745535 2516487 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:34:07.745555 2516487 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1101 09:34:07.748553 2516487 addons.go:515] duration metric: took 9.705328018s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1101 09:34:07.800374 2516487 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1101 09:34:07.804154 2516487 api_server.go:141] control plane version: v1.34.1
	I1101 09:34:07.804183 2516487 api_server.go:131] duration metric: took 58.641188ms to wait for apiserver health ...
	I1101 09:34:07.804193 2516487 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:34:07.811001 2516487 system_pods.go:59] 9 kube-system pods found
	I1101 09:34:07.811045 2516487 system_pods.go:61] "coredns-66bc5c9577-7hh2n" [27a206c0-1b3c-477f-a1c8-63a1f5c04dac] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:34:07.811055 2516487 system_pods.go:61] "coredns-66bc5c9577-mbmf5" [d919bbe5-a51f-497a-ae3b-e76e42dfb5c4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:34:07.811065 2516487 system_pods.go:61] "etcd-default-k8s-diff-port-703627" [ee4635c2-2a7e-4940-a911-a6776fb4bf06] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:34:07.811071 2516487 system_pods.go:61] "kindnet-td2vz" [b0d693ff-55a9-4906-891d-28f7d9849789] Running
	I1101 09:34:07.811079 2516487 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-703627" [6547f2f4-7d33-4b6b-b603-720e901c4f38] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:34:07.811088 2516487 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-703627" [7d330496-b41b-4395-8c59-fdfcfc6043fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:34:07.811094 2516487 system_pods.go:61] "kube-proxy-6lwj9" [f48fe986-0db5-425e-a988-0396b9bd45a8] Running
	I1101 09:34:07.811105 2516487 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-703627" [baf327b2-0afe-4ed0-bff5-1c4d1d5e4e85] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:34:07.811111 2516487 system_pods.go:61] "storage-provisioner" [102037a1-7d8b-49cc-9a86-be75b4bfdcfe] Running
	I1101 09:34:07.811121 2516487 system_pods.go:74] duration metric: took 6.922442ms to wait for pod list to return data ...
	I1101 09:34:07.811130 2516487 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:34:07.832356 2516487 default_sa.go:45] found service account: "default"
	I1101 09:34:07.832389 2516487 default_sa.go:55] duration metric: took 21.247509ms for default service account to be created ...
	I1101 09:34:07.832399 2516487 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:34:07.840538 2516487 system_pods.go:86] 9 kube-system pods found
	I1101 09:34:07.840597 2516487 system_pods.go:89] "coredns-66bc5c9577-7hh2n" [27a206c0-1b3c-477f-a1c8-63a1f5c04dac] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:34:07.840608 2516487 system_pods.go:89] "coredns-66bc5c9577-mbmf5" [d919bbe5-a51f-497a-ae3b-e76e42dfb5c4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:34:07.840617 2516487 system_pods.go:89] "etcd-default-k8s-diff-port-703627" [ee4635c2-2a7e-4940-a911-a6776fb4bf06] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:34:07.840625 2516487 system_pods.go:89] "kindnet-td2vz" [b0d693ff-55a9-4906-891d-28f7d9849789] Running
	I1101 09:34:07.840632 2516487 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-703627" [6547f2f4-7d33-4b6b-b603-720e901c4f38] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:34:07.840645 2516487 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-703627" [7d330496-b41b-4395-8c59-fdfcfc6043fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:34:07.840650 2516487 system_pods.go:89] "kube-proxy-6lwj9" [f48fe986-0db5-425e-a988-0396b9bd45a8] Running
	I1101 09:34:07.840656 2516487 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-703627" [baf327b2-0afe-4ed0-bff5-1c4d1d5e4e85] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:34:07.840681 2516487 system_pods.go:89] "storage-provisioner" [102037a1-7d8b-49cc-9a86-be75b4bfdcfe] Running
	I1101 09:34:07.840708 2516487 system_pods.go:126] duration metric: took 8.302424ms to wait for k8s-apps to be running ...
	I1101 09:34:07.840717 2516487 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:34:07.840783 2516487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:34:07.872000 2516487 system_svc.go:56] duration metric: took 31.273414ms WaitForService to wait for kubelet
	I1101 09:34:07.872038 2516487 kubeadm.go:587] duration metric: took 9.828853108s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:34:07.872059 2516487 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:34:07.880218 2516487 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 09:34:07.880252 2516487 node_conditions.go:123] node cpu capacity is 2
	I1101 09:34:07.880266 2516487 node_conditions.go:105] duration metric: took 8.200338ms to run NodePressure ...
	I1101 09:34:07.880278 2516487 start.go:242] waiting for startup goroutines ...
	I1101 09:34:07.880306 2516487 start.go:247] waiting for cluster config update ...
	I1101 09:34:07.880318 2516487 start.go:256] writing updated cluster config ...
	I1101 09:34:07.880638 2516487 ssh_runner.go:195] Run: rm -f paused
	I1101 09:34:07.887402 2516487 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:34:07.909152 2516487 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7hh2n" in "kube-system" namespace to be "Ready" or be gone ...
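pod_ready.go is now polling the coredns pod until it reports the Ready condition (or the 4-minute budget runs out). As a rough sketch of that kind of wait loop using client-go, assuming a recent client-go/apimachinery and a hypothetical kubeconfig path (this is not the test harness's actual code):

```go
// Sketch: poll a pod until its PodReady condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; the log uses the one in the Jenkins workspace.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-66bc5c9577-7hh2n", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}
```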
	I1101 09:34:07.097479 2518640 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 09:34:07.097506 2518640 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 09:34:07.097582 2518640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-124713
	I1101 09:34:07.125668 2518640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36380 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/newest-cni-124713/id_rsa Username:docker}
	I1101 09:34:07.145500 2518640 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:34:07.145523 2518640 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:34:07.145583 2518640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-124713
	I1101 09:34:07.160280 2518640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36380 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/newest-cni-124713/id_rsa Username:docker}
	I1101 09:34:07.184091 2518640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36380 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/newest-cni-124713/id_rsa Username:docker}
	I1101 09:34:07.497814 2518640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:34:07.531619 2518640 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:34:07.531697 2518640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:34:07.631445 2518640 api_server.go:72] duration metric: took 624.700243ms to wait for apiserver process to appear ...
	I1101 09:34:07.631472 2518640 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:34:07.631491 2518640 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 09:34:07.644283 2518640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:34:07.650309 2518640 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 09:34:07.650330 2518640 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 09:34:07.709727 2518640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:34:07.748576 2518640 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 09:34:07.748655 2518640 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 09:34:07.851748 2518640 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 09:34:07.851769 2518640 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 09:34:07.975997 2518640 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 09:34:07.976016 2518640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 09:34:08.064223 2518640 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 09:34:08.064299 2518640 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 09:34:08.229157 2518640 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 09:34:08.229229 2518640 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 09:34:08.289097 2518640 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 09:34:08.289166 2518640 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 09:34:08.307157 2518640 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 09:34:08.307227 2518640 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 09:34:08.331528 2518640 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 09:34:08.331598 2518640 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 09:34:08.360744 2518640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1101 09:34:09.914714 2516487 pod_ready.go:104] pod "coredns-66bc5c9577-7hh2n" is not "Ready", error: <nil>
	W1101 09:34:12.417571 2516487 pod_ready.go:104] pod "coredns-66bc5c9577-7hh2n" is not "Ready", error: <nil>
	I1101 09:34:12.631916 2518640 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 09:34:12.632004 2518640 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 09:34:13.526543 2518640 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 09:34:13.526645 2518640 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 09:34:13.526674 2518640 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 09:34:13.781198 2518640 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 09:34:13.781267 2518640 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 09:34:13.781301 2518640 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 09:34:13.895928 2518640 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 09:34:13.896001 2518640 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 09:34:14.132428 2518640 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 09:34:14.298207 2518640 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:34:14.298287 2518640 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 09:34:14.632142 2518640 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 09:34:14.649814 2518640 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:34:14.649888 2518640 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 09:34:15.132119 2518640 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 09:34:15.205479 2518640 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:34:15.205515 2518640 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 09:34:15.631928 2518640 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 09:34:15.680613 2518640 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1101 09:34:15.710990 2518640 api_server.go:141] control plane version: v1.34.1
	I1101 09:34:15.711019 2518640 api_server.go:131] duration metric: took 8.079540658s to wait for apiserver health ...
	I1101 09:34:15.711036 2518640 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:34:15.727554 2518640 system_pods.go:59] 8 kube-system pods found
	I1101 09:34:15.727592 2518640 system_pods.go:61] "coredns-66bc5c9577-qkv9l" [a2ef7fa8-3194-409f-a0f6-ece0ba2f87fd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 09:34:15.727602 2518640 system_pods.go:61] "etcd-newest-cni-124713" [77c1f287-1fd4-4f3e-98c4-eff8afed33ae] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:34:15.727611 2518640 system_pods.go:61] "kindnet-4szq6" [dfa514f9-f59f-40fc-86c0-0005e842ee44] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 09:34:15.727622 2518640 system_pods.go:61] "kube-apiserver-newest-cni-124713" [8b1990b4-e307-4233-8887-5fb43000794c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:34:15.727642 2518640 system_pods.go:61] "kube-controller-manager-newest-cni-124713" [78bce883-2129-458e-b59e-ff30b3aa124a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:34:15.727649 2518640 system_pods.go:61] "kube-proxy-b69rf" [0f001764-a3b7-4774-86b6-ab740da66ac4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 09:34:15.727655 2518640 system_pods.go:61] "kube-scheduler-newest-cni-124713" [eeea8f33-465e-40e5-a730-9edd13ae1d26] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:34:15.727665 2518640 system_pods.go:61] "storage-provisioner" [bdc61907-9695-405b-8300-5fd746e2180c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 09:34:15.727671 2518640 system_pods.go:74] duration metric: took 16.627527ms to wait for pod list to return data ...
	I1101 09:34:15.727685 2518640 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:34:15.730221 2518640 default_sa.go:45] found service account: "default"
	I1101 09:34:15.730280 2518640 default_sa.go:55] duration metric: took 2.588211ms for default service account to be created ...
	I1101 09:34:15.730309 2518640 kubeadm.go:587] duration metric: took 8.723565763s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 09:34:15.730338 2518640 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:34:15.740009 2518640 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 09:34:15.740044 2518640 node_conditions.go:123] node cpu capacity is 2
	I1101 09:34:15.740069 2518640 node_conditions.go:105] duration metric: took 9.694681ms to run NodePressure ...
	I1101 09:34:15.740083 2518640 start.go:242] waiting for startup goroutines ...
	I1101 09:34:16.473355 2518640 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.829026772s)
	I1101 09:34:16.473365 2518640 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.76361303s)
	I1101 09:34:16.473517 2518640 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.112677313s)
	I1101 09:34:16.478402 2518640 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-124713 addons enable metrics-server
	
	I1101 09:34:16.483791 2518640 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1101 09:34:16.487158 2518640 addons.go:515] duration metric: took 9.480071835s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1101 09:34:16.487246 2518640 start.go:247] waiting for cluster config update ...
	I1101 09:34:16.487275 2518640 start.go:256] writing updated cluster config ...
	I1101 09:34:16.487568 2518640 ssh_runner.go:195] Run: rm -f paused
	I1101 09:34:16.584992 2518640 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 09:34:16.588448 2518640 out.go:179] * Done! kubectl is now configured to use "newest-cni-124713" cluster and "default" namespace by default
	W1101 09:34:14.918030 2516487 pod_ready.go:104] pod "coredns-66bc5c9577-7hh2n" is not "Ready", error: <nil>
	W1101 09:34:17.417493 2516487 pod_ready.go:104] pod "coredns-66bc5c9577-7hh2n" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.192383429Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.199944778Z" level=info msg="Running pod sandbox: kube-system/kindnet-4szq6/POD" id=a27cf331-7cc9-423f-86c0-8966b7e07d5e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.200185272Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.238324505Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=a27cf331-7cc9-423f-86c0-8966b7e07d5e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.243628752Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=f464aab4-ef48-4b36-8504-2b6cb04c7e20 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.260385817Z" level=info msg="Ran pod sandbox 1e7ac5f58085a996fb4b17f7d8af05a704861b59fcfdbcee844ea290fa4d547d with infra container: kube-system/kindnet-4szq6/POD" id=a27cf331-7cc9-423f-86c0-8966b7e07d5e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.263016095Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=62891047-bdb7-4bee-9f77-4f08c4b64728 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.264579572Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=8aed091c-b8a7-4261-8c8d-090b8854a915 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.26766418Z" level=info msg="Creating container: kube-system/kindnet-4szq6/kindnet-cni" id=56a84612-abd0-47a5-b9cc-1b90b518dcf1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.268176748Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.283025312Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.283600008Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.286212769Z" level=info msg="Ran pod sandbox 74f10e9bc1fe7dcd3e8afb5fadaa740fa2a49fe3daa566f5b0e206f4a60cf259 with infra container: kube-system/kube-proxy-b69rf/POD" id=f464aab4-ef48-4b36-8504-2b6cb04c7e20 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.289024408Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=a4bd9bc0-b28d-46be-8709-a317c9f43028 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.293447883Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=b40a9098-9f75-447b-9d4c-a0ca216f68cb name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.296235686Z" level=info msg="Creating container: kube-system/kube-proxy-b69rf/kube-proxy" id=16ff0295-3372-44e3-83c2-2f68a7486787 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.296823543Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.317408539Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.352408225Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.452901486Z" level=info msg="Created container 7aca1deb842a0ab6957d0e536d9d48ac15390ed42079c675c2b327622f9c757e: kube-system/kindnet-4szq6/kindnet-cni" id=56a84612-abd0-47a5-b9cc-1b90b518dcf1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.455202165Z" level=info msg="Starting container: 7aca1deb842a0ab6957d0e536d9d48ac15390ed42079c675c2b327622f9c757e" id=4cdec3dd-a119-4df2-8956-b52e624d5659 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.45982488Z" level=info msg="Created container 1c56f5f7c7738910e9a4ea8cae5cbe061677d614b47ca42da218c937131d6f7b: kube-system/kube-proxy-b69rf/kube-proxy" id=16ff0295-3372-44e3-83c2-2f68a7486787 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.461860696Z" level=info msg="Starting container: 1c56f5f7c7738910e9a4ea8cae5cbe061677d614b47ca42da218c937131d6f7b" id=a348768e-50c3-4c66-a44e-6ec0b5f5ffa6 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.463130396Z" level=info msg="Started container" PID=1057 containerID=7aca1deb842a0ab6957d0e536d9d48ac15390ed42079c675c2b327622f9c757e description=kube-system/kindnet-4szq6/kindnet-cni id=4cdec3dd-a119-4df2-8956-b52e624d5659 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1e7ac5f58085a996fb4b17f7d8af05a704861b59fcfdbcee844ea290fa4d547d
	Nov 01 09:34:15 newest-cni-124713 crio[609]: time="2025-11-01T09:34:15.467684723Z" level=info msg="Started container" PID=1056 containerID=1c56f5f7c7738910e9a4ea8cae5cbe061677d614b47ca42da218c937131d6f7b description=kube-system/kube-proxy-b69rf/kube-proxy id=a348768e-50c3-4c66-a44e-6ec0b5f5ffa6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=74f10e9bc1fe7dcd3e8afb5fadaa740fa2a49fe3daa566f5b0e206f4a60cf259
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	1c56f5f7c7738       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   8 seconds ago       Running             kube-proxy                1                   74f10e9bc1fe7       kube-proxy-b69rf                            kube-system
	7aca1deb842a0       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   8 seconds ago       Running             kindnet-cni               1                   1e7ac5f58085a       kindnet-4szq6                               kube-system
	a8cd9348cd8f6       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   17 seconds ago      Running             kube-scheduler            1                   dbc31cc58335e       kube-scheduler-newest-cni-124713            kube-system
	0055fe7d149aa       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   17 seconds ago      Running             kube-apiserver            1                   0ce602ef41e5d       kube-apiserver-newest-cni-124713            kube-system
	eafecd62f9287       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   17 seconds ago      Running             kube-controller-manager   1                   a51ffdec24d3d       kube-controller-manager-newest-cni-124713   kube-system
	87351c6e0eb29       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   17 seconds ago      Running             etcd                      1                   3d95bff35f254       etcd-newest-cni-124713                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-124713
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-124713
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=newest-cni-124713
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_33_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:33:43 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-124713
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:34:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:34:14 +0000   Sat, 01 Nov 2025 09:33:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:34:14 +0000   Sat, 01 Nov 2025 09:33:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:34:14 +0000   Sat, 01 Nov 2025 09:33:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 01 Nov 2025 09:34:14 +0000   Sat, 01 Nov 2025 09:33:38 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-124713
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                640263e5-c6bc-4077-95b9-66d3ed0270b1
	  Boot ID:                    eebecd53-57fd-46e5-aa39-103fca906436
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-124713                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         38s
	  kube-system                 kindnet-4szq6                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      33s
	  kube-system                 kube-apiserver-newest-cni-124713             250m (12%)    0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-newest-cni-124713    200m (10%)    0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-b69rf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-scheduler-newest-cni-124713             100m (5%)     0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 32s                kube-proxy       
	  Normal   Starting                 7s                 kube-proxy       
	  Warning  CgroupV1                 47s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  46s (x8 over 46s)  kubelet          Node newest-cni-124713 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    46s (x8 over 46s)  kubelet          Node newest-cni-124713 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     46s (x8 over 46s)  kubelet          Node newest-cni-124713 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  38s                kubelet          Node newest-cni-124713 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 38s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    38s                kubelet          Node newest-cni-124713 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     38s                kubelet          Node newest-cni-124713 status is now: NodeHasSufficientPID
	  Normal   Starting                 38s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           34s                node-controller  Node newest-cni-124713 event: Registered Node newest-cni-124713 in Controller
	  Normal   Starting                 19s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 19s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  18s (x8 over 18s)  kubelet          Node newest-cni-124713 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18s (x8 over 18s)  kubelet          Node newest-cni-124713 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18s (x8 over 18s)  kubelet          Node newest-cni-124713 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7s                 node-controller  Node newest-cni-124713 event: Registered Node newest-cni-124713 in Controller
	
	
	==> dmesg <==
	[  +7.992192] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:15] overlayfs: idmapped layers are currently not supported
	[ +24.457663] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:16] overlayfs: idmapped layers are currently not supported
	[ +26.408819] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:18] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:22] overlayfs: idmapped layers are currently not supported
	[ +31.970573] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:24] overlayfs: idmapped layers are currently not supported
	[ +34.721891] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:25] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:26] overlayfs: idmapped layers are currently not supported
	[  +0.217637] overlayfs: idmapped layers are currently not supported
	[ +42.063471] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:29] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:30] overlayfs: idmapped layers are currently not supported
	[ +22.794250] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:31] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:33] overlayfs: idmapped layers are currently not supported
	[ +18.806441] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:34] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [87351c6e0eb295b8873ea951caef05b3e21649cfc05ef5547ec27729a4256b5c] <==
	{"level":"warn","ts":"2025-11-01T09:34:11.263538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:11.350454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:11.371528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:11.413301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:11.447275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:11.482997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:11.513914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:11.569505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:11.616421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:11.670563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:11.766480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:11.933816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:11.947385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:12.026469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:12.058850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:12.098655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:12.116125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:12.160032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:12.180019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:12.221135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:12.247258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:12.274681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:12.300423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:12.311806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:12.392376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45938","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:34:24 up 18:16,  0 user,  load average: 8.10, 4.59, 3.45
	Linux newest-cni-124713 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7aca1deb842a0ab6957d0e536d9d48ac15390ed42079c675c2b327622f9c757e] <==
	I1101 09:34:15.584954       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:34:15.664297       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 09:34:15.664460       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:34:15.664506       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:34:15.664546       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:34:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:34:15.775082       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:34:15.775136       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:34:15.775146       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:34:15.776194       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [0055fe7d149aaac0d9114c4fce265f810e5bcbdf0bac632b3be71cba9a166106] <==
	I1101 09:34:14.131048       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:34:14.133617       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 09:34:14.147285       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 09:34:14.154379       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:34:14.161702       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:34:14.162040       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 09:34:14.162051       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 09:34:14.162161       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 09:34:14.180410       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:34:14.189389       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 09:34:14.203509       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 09:34:14.246549       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 09:34:14.268525       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 09:34:14.268816       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 09:34:14.356637       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:34:14.974349       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:34:15.902201       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:34:15.995789       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:34:16.091891       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:34:16.142131       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:34:16.314053       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.146.182"}
	I1101 09:34:16.346125       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.79.13"}
	I1101 09:34:17.462623       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:34:17.622037       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:34:17.659137       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [eafecd62f9287b62383e0e205bf02befe8b40647500ac451c7a98c4b9d33b883] <==
	I1101 09:34:17.262357       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 09:34:17.262663       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 09:34:17.262726       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 09:34:17.262880       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 09:34:17.262948       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 09:34:17.262998       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 09:34:17.265538       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 09:34:17.266043       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 09:34:17.266268       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 09:34:17.269967       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 09:34:17.270028       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 09:34:17.270076       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 09:34:17.270441       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:34:17.271072       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 09:34:17.271205       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:34:17.289221       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 09:34:17.317614       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 09:34:17.319256       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-124713"
	I1101 09:34:17.319376       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 09:34:17.319577       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 09:34:17.321504       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:34:17.365697       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:34:17.378665       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:34:17.378749       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:34:17.378805       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [1c56f5f7c7738910e9a4ea8cae5cbe061677d614b47ca42da218c937131d6f7b] <==
	I1101 09:34:15.975394       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:34:16.266667       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:34:16.368766       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:34:16.368799       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 09:34:16.377079       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:34:16.474920       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:34:16.475964       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:34:16.492127       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:34:16.493313       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:34:16.493374       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:34:16.524890       1 config.go:200] "Starting service config controller"
	I1101 09:34:16.524973       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:34:16.525016       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:34:16.525043       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:34:16.525095       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:34:16.525123       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:34:16.529023       1 config.go:309] "Starting node config controller"
	I1101 09:34:16.529120       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:34:16.529152       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:34:16.625907       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:34:16.625943       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:34:16.626010       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a8cd9348cd8f6c0612fd1783c7194b83214ace4b8f1e42197ad6df9e56662e12] <==
	I1101 09:34:09.824553       1 serving.go:386] Generated self-signed cert in-memory
	I1101 09:34:15.559338       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:34:15.559368       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:34:15.570767       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 09:34:15.570801       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 09:34:15.570831       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:34:15.570838       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:34:15.570860       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:34:15.570866       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:34:15.583955       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:34:15.585468       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:34:15.674263       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:34:15.674351       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 09:34:15.674456       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:34:11 newest-cni-124713 kubelet[723]: E1101 09:34:11.285674     723 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-124713\" not found" node="newest-cni-124713"
	Nov 01 09:34:12 newest-cni-124713 kubelet[723]: E1101 09:34:12.469550     723 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-124713\" not found" node="newest-cni-124713"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: I1101 09:34:14.033223     723 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-124713"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: E1101 09:34:14.300046     723 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-124713\" already exists" pod="kube-system/etcd-newest-cni-124713"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: I1101 09:34:14.300080     723 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-124713"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: I1101 09:34:14.315789     723 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-124713"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: I1101 09:34:14.315964     723 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-124713"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: I1101 09:34:14.316006     723 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: I1101 09:34:14.317007     723 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: E1101 09:34:14.398269     723 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-124713\" already exists" pod="kube-system/kube-apiserver-newest-cni-124713"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: I1101 09:34:14.398304     723 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-124713"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: E1101 09:34:14.478622     723 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-124713\" already exists" pod="kube-system/kube-controller-manager-newest-cni-124713"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: I1101 09:34:14.478667     723 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-124713"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: E1101 09:34:14.506386     723 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-124713\" already exists" pod="kube-system/kube-scheduler-newest-cni-124713"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: I1101 09:34:14.868930     723 apiserver.go:52] "Watching apiserver"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: I1101 09:34:14.929902     723 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: I1101 09:34:14.965236     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/dfa514f9-f59f-40fc-86c0-0005e842ee44-cni-cfg\") pod \"kindnet-4szq6\" (UID: \"dfa514f9-f59f-40fc-86c0-0005e842ee44\") " pod="kube-system/kindnet-4szq6"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: I1101 09:34:14.965287     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dfa514f9-f59f-40fc-86c0-0005e842ee44-xtables-lock\") pod \"kindnet-4szq6\" (UID: \"dfa514f9-f59f-40fc-86c0-0005e842ee44\") " pod="kube-system/kindnet-4szq6"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: I1101 09:34:14.965309     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dfa514f9-f59f-40fc-86c0-0005e842ee44-lib-modules\") pod \"kindnet-4szq6\" (UID: \"dfa514f9-f59f-40fc-86c0-0005e842ee44\") " pod="kube-system/kindnet-4szq6"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: I1101 09:34:14.965336     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f001764-a3b7-4774-86b6-ab740da66ac4-lib-modules\") pod \"kube-proxy-b69rf\" (UID: \"0f001764-a3b7-4774-86b6-ab740da66ac4\") " pod="kube-system/kube-proxy-b69rf"
	Nov 01 09:34:14 newest-cni-124713 kubelet[723]: I1101 09:34:14.965379     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0f001764-a3b7-4774-86b6-ab740da66ac4-xtables-lock\") pod \"kube-proxy-b69rf\" (UID: \"0f001764-a3b7-4774-86b6-ab740da66ac4\") " pod="kube-system/kube-proxy-b69rf"
	Nov 01 09:34:15 newest-cni-124713 kubelet[723]: I1101 09:34:15.030697     723 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 01 09:34:18 newest-cni-124713 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 09:34:18 newest-cni-124713 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 09:34:18 newest-cni-124713 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-124713 -n newest-cni-124713
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-124713 -n newest-cni-124713: exit status 2 (387.622675ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-124713 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-qkv9l storage-provisioner dashboard-metrics-scraper-6ffb444bf9-jxmg4 kubernetes-dashboard-855c9754f9-mqllm
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-124713 describe pod coredns-66bc5c9577-qkv9l storage-provisioner dashboard-metrics-scraper-6ffb444bf9-jxmg4 kubernetes-dashboard-855c9754f9-mqllm
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-124713 describe pod coredns-66bc5c9577-qkv9l storage-provisioner dashboard-metrics-scraper-6ffb444bf9-jxmg4 kubernetes-dashboard-855c9754f9-mqllm: exit status 1 (89.319877ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-qkv9l" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-jxmg4" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-mqllm" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-124713 describe pod coredns-66bc5c9577-qkv9l storage-provisioner dashboard-metrics-scraper-6ffb444bf9-jxmg4 kubernetes-dashboard-855c9754f9-mqllm: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (7.61s)
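For local triage, the serial group can be re-run with the standard Go test runner from a minikube source checkout; re-running only the Pause subtest would skip the earlier serial steps it depends on, so the whole group is selected here. This is only a sketch: the driver and runtime arguments this CI job passes to the harness are not shown in the log and are omitted.

	go test ./test/integration -run 'TestStartStop/group/newest-cni' -timeout 30m
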

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (7.89s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-703627 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-703627 --alsologtostderr -v=1: exit status 80 (2.577451179s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-703627 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:34:59.290078 2524789 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:34:59.290237 2524789 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:34:59.290250 2524789 out.go:374] Setting ErrFile to fd 2...
	I1101 09:34:59.290255 2524789 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:34:59.290507 2524789 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 09:34:59.290749 2524789 out.go:368] Setting JSON to false
	I1101 09:34:59.290771 2524789 mustload.go:66] Loading cluster: default-k8s-diff-port-703627
	I1101 09:34:59.291136 2524789 config.go:182] Loaded profile config "default-k8s-diff-port-703627": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:34:59.291584 2524789 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-703627 --format={{.State.Status}}
	I1101 09:34:59.329555 2524789 host.go:66] Checking if "default-k8s-diff-port-703627" exists ...
	I1101 09:34:59.329870 2524789 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:34:59.430957 2524789 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-01 09:34:59.416683549 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:34:59.431583 2524789 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-703627 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 09:34:59.434978 2524789 out.go:179] * Pausing node default-k8s-diff-port-703627 ... 
	I1101 09:34:59.437835 2524789 host.go:66] Checking if "default-k8s-diff-port-703627" exists ...
	I1101 09:34:59.438186 2524789 ssh_runner.go:195] Run: systemctl --version
	I1101 09:34:59.438236 2524789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-703627
	I1101 09:34:59.471348 2524789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36375 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/default-k8s-diff-port-703627/id_rsa Username:docker}
	I1101 09:34:59.584742 2524789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:34:59.618129 2524789 pause.go:52] kubelet running: true
	I1101 09:34:59.618200 2524789 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:34:59.967207 2524789 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:34:59.967311 2524789 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:35:00.271840 2524789 cri.go:89] found id: "785bd2a3eea280c35729177748a00a80a454d2d6597849d0896d10a19b7e2833"
	I1101 09:35:00.271928 2524789 cri.go:89] found id: "2c75f0a8e43174ffcb23721d35794e30d0c951d79bbefa0776e5d7225c6a6443"
	I1101 09:35:00.271949 2524789 cri.go:89] found id: "e3203b28a815bbf14e3e0b281844d7e2c9449efdae4d2b238d97510ac329b0a5"
	I1101 09:35:00.271969 2524789 cri.go:89] found id: "906fe23ff42d46025170a70959cf630e42fc9c5c8900d890108c863e5308c3a1"
	I1101 09:35:00.271988 2524789 cri.go:89] found id: "eb67b5d7cf8442d6e208955bcc3c7672c8626771d4a76dbef50244c7fd76ddb5"
	I1101 09:35:00.272029 2524789 cri.go:89] found id: "988bd3df894076818e904c7d20f94d20da1787b44cb9aa57fbf416feb32b2c15"
	I1101 09:35:00.272046 2524789 cri.go:89] found id: "ee79a7fc9cfee9bef0f776db44e3429ff28411131f6bdc1c4562483440dc3f4c"
	I1101 09:35:00.272066 2524789 cri.go:89] found id: "da7e2f29a75554b0877ff12539ff3a7b3a2f4e382fdeae7e7c099e23f545bfe9"
	I1101 09:35:00.272085 2524789 cri.go:89] found id: "ae10c649f560f9607936e15ba64a4779c42997b6bfc46ec03edd143e585f8bb2"
	I1101 09:35:00.272119 2524789 cri.go:89] found id: "c7d1cc29b1ea5c8867b99a096fc1bb9f05c294172a955361ff24adccbc307e8b"
	I1101 09:35:00.272142 2524789 cri.go:89] found id: "eaf8d298a127caa808c3e83b43303a6d0f654deca7780b6baed673bc56707d82"
	I1101 09:35:00.272162 2524789 cri.go:89] found id: "cbda83eea242fce4b409534daa04c22b9a0d561f0566989379c73d1d837b7244"
	I1101 09:35:00.272181 2524789 cri.go:89] found id: ""
	I1101 09:35:00.272268 2524789 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:35:00.333998 2524789 retry.go:31] will retry after 302.594799ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:35:00Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:35:00.637323 2524789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:35:00.655318 2524789 pause.go:52] kubelet running: false
	I1101 09:35:00.655416 2524789 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:35:00.914495 2524789 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:35:00.914618 2524789 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:35:01.030673 2524789 cri.go:89] found id: "785bd2a3eea280c35729177748a00a80a454d2d6597849d0896d10a19b7e2833"
	I1101 09:35:01.030746 2524789 cri.go:89] found id: "2c75f0a8e43174ffcb23721d35794e30d0c951d79bbefa0776e5d7225c6a6443"
	I1101 09:35:01.030765 2524789 cri.go:89] found id: "e3203b28a815bbf14e3e0b281844d7e2c9449efdae4d2b238d97510ac329b0a5"
	I1101 09:35:01.030781 2524789 cri.go:89] found id: "906fe23ff42d46025170a70959cf630e42fc9c5c8900d890108c863e5308c3a1"
	I1101 09:35:01.030810 2524789 cri.go:89] found id: "eb67b5d7cf8442d6e208955bcc3c7672c8626771d4a76dbef50244c7fd76ddb5"
	I1101 09:35:01.030831 2524789 cri.go:89] found id: "988bd3df894076818e904c7d20f94d20da1787b44cb9aa57fbf416feb32b2c15"
	I1101 09:35:01.030846 2524789 cri.go:89] found id: "ee79a7fc9cfee9bef0f776db44e3429ff28411131f6bdc1c4562483440dc3f4c"
	I1101 09:35:01.030864 2524789 cri.go:89] found id: "da7e2f29a75554b0877ff12539ff3a7b3a2f4e382fdeae7e7c099e23f545bfe9"
	I1101 09:35:01.030880 2524789 cri.go:89] found id: "ae10c649f560f9607936e15ba64a4779c42997b6bfc46ec03edd143e585f8bb2"
	I1101 09:35:01.030910 2524789 cri.go:89] found id: "c7d1cc29b1ea5c8867b99a096fc1bb9f05c294172a955361ff24adccbc307e8b"
	I1101 09:35:01.030931 2524789 cri.go:89] found id: "eaf8d298a127caa808c3e83b43303a6d0f654deca7780b6baed673bc56707d82"
	I1101 09:35:01.030945 2524789 cri.go:89] found id: "cbda83eea242fce4b409534daa04c22b9a0d561f0566989379c73d1d837b7244"
	I1101 09:35:01.030973 2524789 cri.go:89] found id: ""
	I1101 09:35:01.031054 2524789 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:35:01.047923 2524789 retry.go:31] will retry after 334.782544ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:35:01Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:35:01.383356 2524789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:35:01.400760 2524789 pause.go:52] kubelet running: false
	I1101 09:35:01.400836 2524789 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:35:01.642936 2524789 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:35:01.643070 2524789 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:35:01.738959 2524789 cri.go:89] found id: "785bd2a3eea280c35729177748a00a80a454d2d6597849d0896d10a19b7e2833"
	I1101 09:35:01.739034 2524789 cri.go:89] found id: "2c75f0a8e43174ffcb23721d35794e30d0c951d79bbefa0776e5d7225c6a6443"
	I1101 09:35:01.739057 2524789 cri.go:89] found id: "e3203b28a815bbf14e3e0b281844d7e2c9449efdae4d2b238d97510ac329b0a5"
	I1101 09:35:01.739076 2524789 cri.go:89] found id: "906fe23ff42d46025170a70959cf630e42fc9c5c8900d890108c863e5308c3a1"
	I1101 09:35:01.739109 2524789 cri.go:89] found id: "eb67b5d7cf8442d6e208955bcc3c7672c8626771d4a76dbef50244c7fd76ddb5"
	I1101 09:35:01.739134 2524789 cri.go:89] found id: "988bd3df894076818e904c7d20f94d20da1787b44cb9aa57fbf416feb32b2c15"
	I1101 09:35:01.739152 2524789 cri.go:89] found id: "ee79a7fc9cfee9bef0f776db44e3429ff28411131f6bdc1c4562483440dc3f4c"
	I1101 09:35:01.739171 2524789 cri.go:89] found id: "da7e2f29a75554b0877ff12539ff3a7b3a2f4e382fdeae7e7c099e23f545bfe9"
	I1101 09:35:01.739187 2524789 cri.go:89] found id: "ae10c649f560f9607936e15ba64a4779c42997b6bfc46ec03edd143e585f8bb2"
	I1101 09:35:01.739218 2524789 cri.go:89] found id: "c7d1cc29b1ea5c8867b99a096fc1bb9f05c294172a955361ff24adccbc307e8b"
	I1101 09:35:01.739242 2524789 cri.go:89] found id: "eaf8d298a127caa808c3e83b43303a6d0f654deca7780b6baed673bc56707d82"
	I1101 09:35:01.739262 2524789 cri.go:89] found id: "cbda83eea242fce4b409534daa04c22b9a0d561f0566989379c73d1d837b7244"
	I1101 09:35:01.739281 2524789 cri.go:89] found id: ""
	I1101 09:35:01.739392 2524789 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:35:01.755940 2524789 out.go:203] 
	W1101 09:35:01.758910 2524789 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:35:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:35:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:35:01.758934 2524789 out.go:285] * 
	* 
	W1101 09:35:01.776239 2524789 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:35:01.781228 2524789 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-703627 --alsologtostderr -v=1 failed: exit status 80
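The exit status 80 is the GUEST_PAUSE error in the stderr block above: every retry of "sudo runc list -f json" fails with "open /run/runc: no such file or directory", so pause never obtains a container list to freeze. The probes below repeat the same sequence by hand from the host; the first three commands are adapted from the pause log, and the final listing of candidate /run paths is an assumption about where the CRI-O runtime state may live, not something the log shows:

	out/minikube-linux-arm64 -p default-k8s-diff-port-703627 ssh -- sudo systemctl is-active kubelet
	out/minikube-linux-arm64 -p default-k8s-diff-port-703627 ssh -- "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	out/minikube-linux-arm64 -p default-k8s-diff-port-703627 ssh -- sudo runc list -f json
	out/minikube-linux-arm64 -p default-k8s-diff-port-703627 ssh -- "ls -d /run/runc /run/crio /run/containers"
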
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-703627
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-703627:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a747d7437780c8943ddef42d5ec2400858d0693e94483b75825664710eb98d9e",
	        "Created": "2025-11-01T09:32:04.900915027Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2516618,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:33:49.028800033Z",
	            "FinishedAt": "2025-11-01T09:33:48.1242607Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/a747d7437780c8943ddef42d5ec2400858d0693e94483b75825664710eb98d9e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a747d7437780c8943ddef42d5ec2400858d0693e94483b75825664710eb98d9e/hostname",
	        "HostsPath": "/var/lib/docker/containers/a747d7437780c8943ddef42d5ec2400858d0693e94483b75825664710eb98d9e/hosts",
	        "LogPath": "/var/lib/docker/containers/a747d7437780c8943ddef42d5ec2400858d0693e94483b75825664710eb98d9e/a747d7437780c8943ddef42d5ec2400858d0693e94483b75825664710eb98d9e-json.log",
	        "Name": "/default-k8s-diff-port-703627",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-703627:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-703627",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a747d7437780c8943ddef42d5ec2400858d0693e94483b75825664710eb98d9e",
	                "LowerDir": "/var/lib/docker/overlay2/b02ded618c44d4ffa151302b7a817601e39ffe2f362b1ddbc18b362601181ea2-init/diff:/var/lib/docker/overlay2/e248e2c4c8c52e2b41c7098e27a1e6d3433c7b0d01c47093073da500268c4b77/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b02ded618c44d4ffa151302b7a817601e39ffe2f362b1ddbc18b362601181ea2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b02ded618c44d4ffa151302b7a817601e39ffe2f362b1ddbc18b362601181ea2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b02ded618c44d4ffa151302b7a817601e39ffe2f362b1ddbc18b362601181ea2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-703627",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-703627/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-703627",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-703627",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-703627",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "39a4c8a957e126c975ee18a383bf143b0d1b5be8694aeb07925364177018ffac",
	            "SandboxKey": "/var/run/docker/netns/39a4c8a957e1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36375"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36376"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36379"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36377"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36378"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-703627": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:15:25:c2:a8:78",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f92e55acde037535672c6bdfac6afcfec87a27f01e6451819c4f246fbcbac0db",
	                    "EndpointID": "34e1a1419bb65e3dcbc6e8e613125117d93b32ba97c71329d4acfd28e835ecc8",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-703627",
	                        "a747d7437780"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
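The NetworkSettings.Ports map in the inspect output above is where the pause command looks up the host-side SSH port (36375 here, bound to 127.0.0.1); the same Go template that appears in the pause log can be run directly against the container:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-703627
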
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-703627 -n default-k8s-diff-port-703627
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-703627 -n default-k8s-diff-port-703627: exit status 2 (467.576632ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
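A non-zero exit code from minikube status here typically just signals that at least one component is not reported as Running; the harness notes it "may be ok" because the failed pause attempt above had already disabled the kubelet. The unformatted command shows the per-component view, assuming the same binary and profile:

	out/minikube-linux-arm64 status -p default-k8s-diff-port-703627
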
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-703627 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-703627 logs -n 25: (1.790500178s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ pause   │ -p no-preload-357229 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ delete  │ -p no-preload-357229                                                                                                                                                                                                                          │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ delete  │ -p no-preload-357229                                                                                                                                                                                                                          │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ delete  │ -p disable-driver-mounts-054033                                                                                                                                                                                                               │ disable-driver-mounts-054033 │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ start   │ -p default-k8s-diff-port-703627 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-703627 │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:33 UTC │
	│ image   │ embed-certs-312549 image list --format=json                                                                                                                                                                                                   │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ pause   │ -p embed-certs-312549 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │                     │
	│ delete  │ -p embed-certs-312549                                                                                                                                                                                                                         │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ delete  │ -p embed-certs-312549                                                                                                                                                                                                                         │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ start   │ -p newest-cni-124713 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-124713            │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-703627 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-703627 │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-703627 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-703627 │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-703627 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-703627 │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ start   │ -p default-k8s-diff-port-703627 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-703627 │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:34 UTC │
	│ addons  │ enable metrics-server -p newest-cni-124713 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-124713            │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │                     │
	│ stop    │ -p newest-cni-124713 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-124713            │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ addons  │ enable dashboard -p newest-cni-124713 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-124713            │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ start   │ -p newest-cni-124713 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-124713            │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:34 UTC │
	│ image   │ newest-cni-124713 image list --format=json                                                                                                                                                                                                    │ newest-cni-124713            │ jenkins │ v1.37.0 │ 01 Nov 25 09:34 UTC │ 01 Nov 25 09:34 UTC │
	│ pause   │ -p newest-cni-124713 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-124713            │ jenkins │ v1.37.0 │ 01 Nov 25 09:34 UTC │                     │
	│ delete  │ -p newest-cni-124713                                                                                                                                                                                                                          │ newest-cni-124713            │ jenkins │ v1.37.0 │ 01 Nov 25 09:34 UTC │ 01 Nov 25 09:34 UTC │
	│ delete  │ -p newest-cni-124713                                                                                                                                                                                                                          │ newest-cni-124713            │ jenkins │ v1.37.0 │ 01 Nov 25 09:34 UTC │ 01 Nov 25 09:34 UTC │
	│ start   │ -p auto-206273 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-206273                  │ jenkins │ v1.37.0 │ 01 Nov 25 09:34 UTC │                     │
	│ image   │ default-k8s-diff-port-703627 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-703627 │ jenkins │ v1.37.0 │ 01 Nov 25 09:34 UTC │ 01 Nov 25 09:34 UTC │
	│ pause   │ -p default-k8s-diff-port-703627 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-703627 │ jenkins │ v1.37.0 │ 01 Nov 25 09:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:34:27
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:34:27.623723 2522602 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:34:27.623955 2522602 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:34:27.623985 2522602 out.go:374] Setting ErrFile to fd 2...
	I1101 09:34:27.624005 2522602 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:34:27.624404 2522602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 09:34:27.624987 2522602 out.go:368] Setting JSON to false
	I1101 09:34:27.628184 2522602 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":65814,"bootTime":1761923854,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 09:34:27.628255 2522602 start.go:143] virtualization:  
	I1101 09:34:27.632265 2522602 out.go:179] * [auto-206273] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 09:34:27.636521 2522602 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:34:27.636620 2522602 notify.go:221] Checking for updates...
	I1101 09:34:27.642883 2522602 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:34:27.646046 2522602 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:34:27.649140 2522602 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2314135/.minikube
	I1101 09:34:27.652205 2522602 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 09:34:27.655124 2522602 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:34:27.658764 2522602 config.go:182] Loaded profile config "default-k8s-diff-port-703627": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:34:27.658950 2522602 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:34:27.684835 2522602 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:34:27.684973 2522602 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:34:27.768467 2522602 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 09:34:27.757664616 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:34:27.768578 2522602 docker.go:319] overlay module found
	I1101 09:34:27.773744 2522602 out.go:179] * Using the docker driver based on user configuration
	I1101 09:34:27.776611 2522602 start.go:309] selected driver: docker
	I1101 09:34:27.776633 2522602 start.go:930] validating driver "docker" against <nil>
	I1101 09:34:27.776648 2522602 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:34:27.777392 2522602 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:34:27.833998 2522602 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 09:34:27.82513335 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:34:27.834235 2522602 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:34:27.834490 2522602 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:34:27.837433 2522602 out.go:179] * Using Docker driver with root privileges
	I1101 09:34:27.840267 2522602 cni.go:84] Creating CNI manager for ""
	I1101 09:34:27.840333 2522602 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:34:27.840345 2522602 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:34:27.840430 2522602 start.go:353] cluster config:
	{Name:auto-206273 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-206273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1101 09:34:27.843535 2522602 out.go:179] * Starting "auto-206273" primary control-plane node in "auto-206273" cluster
	I1101 09:34:27.846298 2522602 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:34:27.849143 2522602 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:34:27.851974 2522602 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:34:27.852050 2522602 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:34:27.852104 2522602 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 09:34:27.852119 2522602 cache.go:59] Caching tarball of preloaded images
	I1101 09:34:27.852215 2522602 preload.go:233] Found /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 09:34:27.852225 2522602 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:34:27.852329 2522602 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/config.json ...
	I1101 09:34:27.852346 2522602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/config.json: {Name:mk7832f26eaceb1a8696643732700482a4c58633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:34:27.870732 2522602 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:34:27.870755 2522602 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:34:27.870769 2522602 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:34:27.870805 2522602 start.go:360] acquireMachinesLock for auto-206273: {Name:mkb13d9460e4862596764be16bc3911d20f0d574 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:34:27.870914 2522602 start.go:364] duration metric: took 88.539µs to acquireMachinesLock for "auto-206273"
	I1101 09:34:27.870986 2522602 start.go:93] Provisioning new machine with config: &{Name:auto-206273 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-206273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:34:27.871070 2522602 start.go:125] createHost starting for "" (driver="docker")
	W1101 09:34:23.920150 2516487 pod_ready.go:104] pod "coredns-66bc5c9577-7hh2n" is not "Ready", error: <nil>
	W1101 09:34:26.415960 2516487 pod_ready.go:104] pod "coredns-66bc5c9577-7hh2n" is not "Ready", error: <nil>
	I1101 09:34:27.874451 2522602 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 09:34:27.874709 2522602 start.go:159] libmachine.API.Create for "auto-206273" (driver="docker")
	I1101 09:34:27.874748 2522602 client.go:173] LocalClient.Create starting
	I1101 09:34:27.874818 2522602 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem
	I1101 09:34:27.874855 2522602 main.go:143] libmachine: Decoding PEM data...
	I1101 09:34:27.874877 2522602 main.go:143] libmachine: Parsing certificate...
	I1101 09:34:27.874933 2522602 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem
	I1101 09:34:27.874956 2522602 main.go:143] libmachine: Decoding PEM data...
	I1101 09:34:27.874969 2522602 main.go:143] libmachine: Parsing certificate...
	I1101 09:34:27.875315 2522602 cli_runner.go:164] Run: docker network inspect auto-206273 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 09:34:27.890466 2522602 cli_runner.go:211] docker network inspect auto-206273 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 09:34:27.890541 2522602 network_create.go:284] running [docker network inspect auto-206273] to gather additional debugging logs...
	I1101 09:34:27.890558 2522602 cli_runner.go:164] Run: docker network inspect auto-206273
	W1101 09:34:27.906019 2522602 cli_runner.go:211] docker network inspect auto-206273 returned with exit code 1
	I1101 09:34:27.906057 2522602 network_create.go:287] error running [docker network inspect auto-206273]: docker network inspect auto-206273: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-206273 not found
	I1101 09:34:27.906074 2522602 network_create.go:289] output of [docker network inspect auto-206273]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-206273 not found
	
	** /stderr **
	I1101 09:34:27.906188 2522602 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:34:27.925152 2522602 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2d14cb2bf967 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:44:96:dd:d5:f7} reservation:<nil>}
	I1101 09:34:27.925497 2522602 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5e2113ca68f6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fa:43:2d:73:9d:6f} reservation:<nil>}
	I1101 09:34:27.925851 2522602 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-06825307e87a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:46:bb:6a:93:8e:bc} reservation:<nil>}
	I1101 09:34:27.926298 2522602 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001963b00}
	I1101 09:34:27.926319 2522602 network_create.go:124] attempt to create docker network auto-206273 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1101 09:34:27.926387 2522602 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-206273 auto-206273
	I1101 09:34:27.992936 2522602 network_create.go:108] docker network auto-206273 192.168.76.0/24 created
	I1101 09:34:27.992970 2522602 kic.go:121] calculated static IP "192.168.76.2" for the "auto-206273" container
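The three skipped subnets above are the bridge networks Docker already owns on this host, so minikube settles on the first free /24 (192.168.76.0/24) and reserves .2 for the node. A quick way to double-check the allocation by hand, assuming the same Docker host and the network name used in this run, is:

	docker network ls -q | xargs docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'
	docker network inspect auto-206273 --format 'subnet {{range .IPAM.Config}}{{.Subnet}}{{end}} gateway {{range .IPAM.Config}}{{.Gateway}}{{end}}'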
	I1101 09:34:27.993061 2522602 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 09:34:28.013448 2522602 cli_runner.go:164] Run: docker volume create auto-206273 --label name.minikube.sigs.k8s.io=auto-206273 --label created_by.minikube.sigs.k8s.io=true
	I1101 09:34:28.035492 2522602 oci.go:103] Successfully created a docker volume auto-206273
	I1101 09:34:28.035604 2522602 cli_runner.go:164] Run: docker run --rm --name auto-206273-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-206273 --entrypoint /usr/bin/test -v auto-206273:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 09:34:28.536628 2522602 oci.go:107] Successfully prepared a docker volume auto-206273
	I1101 09:34:28.536674 2522602 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:34:28.536694 2522602 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 09:34:28.536769 2522602 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-206273:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1101 09:34:28.915468 2516487 pod_ready.go:104] pod "coredns-66bc5c9577-7hh2n" is not "Ready", error: <nil>
	W1101 09:34:31.414306 2516487 pod_ready.go:104] pod "coredns-66bc5c9577-7hh2n" is not "Ready", error: <nil>
	W1101 09:34:33.415911 2516487 pod_ready.go:104] pod "coredns-66bc5c9577-7hh2n" is not "Ready", error: <nil>
	I1101 09:34:32.821087 2522602 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-206273:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.284280898s)
	I1101 09:34:32.821117 2522602 kic.go:203] duration metric: took 4.284419815s to extract preloaded images to volume ...
	W1101 09:34:32.821255 2522602 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 09:34:32.821366 2522602 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 09:34:32.884333 2522602 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-206273 --name auto-206273 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-206273 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-206273 --network auto-206273 --ip 192.168.76.2 --volume auto-206273:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 09:34:33.181137 2522602 cli_runner.go:164] Run: docker container inspect auto-206273 --format={{.State.Running}}
	I1101 09:34:33.199786 2522602 cli_runner.go:164] Run: docker container inspect auto-206273 --format={{.State.Status}}
	I1101 09:34:33.225010 2522602 cli_runner.go:164] Run: docker exec auto-206273 stat /var/lib/dpkg/alternatives/iptables
	I1101 09:34:33.288783 2522602 oci.go:144] the created container "auto-206273" has a running status.
	I1101 09:34:33.288808 2522602 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/auto-206273/id_rsa...
	I1101 09:34:33.783443 2522602 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/auto-206273/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 09:34:33.802842 2522602 cli_runner.go:164] Run: docker container inspect auto-206273 --format={{.State.Status}}
	I1101 09:34:33.821129 2522602 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 09:34:33.821153 2522602 kic_runner.go:114] Args: [docker exec --privileged auto-206273 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 09:34:33.859419 2522602 cli_runner.go:164] Run: docker container inspect auto-206273 --format={{.State.Status}}
	I1101 09:34:33.876371 2522602 machine.go:94] provisionDockerMachine start ...
	I1101 09:34:33.876471 2522602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-206273
	I1101 09:34:33.892713 2522602 main.go:143] libmachine: Using SSH client type: native
	I1101 09:34:33.893056 2522602 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36385 <nil> <nil>}
	I1101 09:34:33.893071 2522602 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:34:33.893658 2522602 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52822->127.0.0.1:36385: read: connection reset by peer
	I1101 09:34:37.051350 2522602 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-206273
	
	I1101 09:34:37.051376 2522602 ubuntu.go:182] provisioning hostname "auto-206273"
	I1101 09:34:37.051459 2522602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-206273
	I1101 09:34:37.068963 2522602 main.go:143] libmachine: Using SSH client type: native
	I1101 09:34:37.069279 2522602 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36385 <nil> <nil>}
	I1101 09:34:37.069297 2522602 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-206273 && echo "auto-206273" | sudo tee /etc/hostname
	I1101 09:34:37.230843 2522602 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-206273
	
	I1101 09:34:37.230933 2522602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-206273
	I1101 09:34:37.249257 2522602 main.go:143] libmachine: Using SSH client type: native
	I1101 09:34:37.249573 2522602 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36385 <nil> <nil>}
	I1101 09:34:37.249595 2522602 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-206273' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-206273/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-206273' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:34:37.404028 2522602 main.go:143] libmachine: SSH cmd err, output: <nil>: 
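The script it just ran pins the node's own name into /etc/hosts: if no auto-206273 entry exists it either rewrites the 127.0.1.1 line or appends one. A quick check from the host, assuming the container name from this run:

	docker exec auto-206273 grep -n auto-206273 /etc/hosts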
	I1101 09:34:37.404054 2522602 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-2314135/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-2314135/.minikube}
	I1101 09:34:37.404077 2522602 ubuntu.go:190] setting up certificates
	I1101 09:34:37.404086 2522602 provision.go:84] configureAuth start
	I1101 09:34:37.404147 2522602 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-206273
	I1101 09:34:37.422954 2522602 provision.go:143] copyHostCerts
	I1101 09:34:37.423013 2522602 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem, removing ...
	I1101 09:34:37.423021 2522602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem
	I1101 09:34:37.423101 2522602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem (1123 bytes)
	I1101 09:34:37.423197 2522602 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem, removing ...
	I1101 09:34:37.423203 2522602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem
	I1101 09:34:37.423227 2522602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem (1675 bytes)
	I1101 09:34:37.423277 2522602 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem, removing ...
	I1101 09:34:37.423282 2522602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem
	I1101 09:34:37.423304 2522602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem (1082 bytes)
	I1101 09:34:37.423351 2522602 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem org=jenkins.auto-206273 san=[127.0.0.1 192.168.76.2 auto-206273 localhost minikube]
	I1101 09:34:37.593800 2522602 provision.go:177] copyRemoteCerts
	I1101 09:34:37.593871 2522602 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:34:37.593944 2522602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-206273
	I1101 09:34:37.613517 2522602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36385 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/auto-206273/id_rsa Username:docker}
	W1101 09:34:35.917808 2516487 pod_ready.go:104] pod "coredns-66bc5c9577-7hh2n" is not "Ready", error: <nil>
	W1101 09:34:38.415752 2516487 pod_ready.go:104] pod "coredns-66bc5c9577-7hh2n" is not "Ready", error: <nil>
	I1101 09:34:37.720849 2522602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:34:37.740531 2522602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1101 09:34:37.758769 2522602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:34:37.777250 2522602 provision.go:87] duration metric: took 373.141197ms to configureAuth
	I1101 09:34:37.777278 2522602 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:34:37.777461 2522602 config.go:182] Loaded profile config "auto-206273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:34:37.777587 2522602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-206273
	I1101 09:34:37.795174 2522602 main.go:143] libmachine: Using SSH client type: native
	I1101 09:34:37.795479 2522602 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36385 <nil> <nil>}
	I1101 09:34:37.795499 2522602 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:34:38.188339 2522602 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
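The CRIO_MINIKUBE_OPTIONS drop-in written above marks the whole service CIDR (10.96.0.0/12) as an insecure registry, presumably so images served from in-cluster registry services can be pulled without TLS. The result can be inspected on the node (paths and container name taken from this run):

	docker exec auto-206273 cat /etc/sysconfig/crio.minikube
	docker exec auto-206273 systemctl is-active crio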
	I1101 09:34:38.188361 2522602 machine.go:97] duration metric: took 4.311966989s to provisionDockerMachine
	I1101 09:34:38.188372 2522602 client.go:176] duration metric: took 10.313614048s to LocalClient.Create
	I1101 09:34:38.188396 2522602 start.go:167] duration metric: took 10.313680507s to libmachine.API.Create "auto-206273"
	I1101 09:34:38.188404 2522602 start.go:293] postStartSetup for "auto-206273" (driver="docker")
	I1101 09:34:38.188414 2522602 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:34:38.188488 2522602 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:34:38.188531 2522602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-206273
	I1101 09:34:38.207555 2522602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36385 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/auto-206273/id_rsa Username:docker}
	I1101 09:34:38.311919 2522602 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:34:38.315328 2522602 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:34:38.315358 2522602 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:34:38.315370 2522602 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/addons for local assets ...
	I1101 09:34:38.315424 2522602 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/files for local assets ...
	I1101 09:34:38.315513 2522602 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem -> 23159822.pem in /etc/ssl/certs
	I1101 09:34:38.315621 2522602 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:34:38.323569 2522602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem --> /etc/ssl/certs/23159822.pem (1708 bytes)
	I1101 09:34:38.343007 2522602 start.go:296] duration metric: took 154.586481ms for postStartSetup
	I1101 09:34:38.343400 2522602 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-206273
	I1101 09:34:38.361431 2522602 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/config.json ...
	I1101 09:34:38.361723 2522602 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:34:38.361787 2522602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-206273
	I1101 09:34:38.381598 2522602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36385 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/auto-206273/id_rsa Username:docker}
	I1101 09:34:38.485367 2522602 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:34:38.490503 2522602 start.go:128] duration metric: took 10.619418512s to createHost
	I1101 09:34:38.490527 2522602 start.go:83] releasing machines lock for "auto-206273", held for 10.619560432s
	I1101 09:34:38.490614 2522602 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-206273
	I1101 09:34:38.508371 2522602 ssh_runner.go:195] Run: cat /version.json
	I1101 09:34:38.508435 2522602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-206273
	I1101 09:34:38.508486 2522602 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:34:38.508536 2522602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-206273
	I1101 09:34:38.527743 2522602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36385 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/auto-206273/id_rsa Username:docker}
	I1101 09:34:38.547993 2522602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36385 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/auto-206273/id_rsa Username:docker}
	I1101 09:34:38.745886 2522602 ssh_runner.go:195] Run: systemctl --version
	I1101 09:34:38.752284 2522602 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:34:38.792707 2522602 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:34:38.797060 2522602 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:34:38.797177 2522602 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:34:38.828098 2522602 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 09:34:38.828120 2522602 start.go:496] detecting cgroup driver to use...
	I1101 09:34:38.828152 2522602 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 09:34:38.828199 2522602 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:34:38.845716 2522602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:34:38.857978 2522602 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:34:38.858038 2522602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:34:38.877093 2522602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:34:38.895977 2522602 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:34:39.030710 2522602 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:34:39.148897 2522602 docker.go:234] disabling docker service ...
	I1101 09:34:39.149036 2522602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:34:39.168912 2522602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:34:39.188898 2522602 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:34:39.312013 2522602 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:34:39.425442 2522602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:34:39.438842 2522602 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:34:39.452639 2522602 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:34:39.452772 2522602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:34:39.462081 2522602 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:34:39.462187 2522602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:34:39.472038 2522602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:34:39.481682 2522602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:34:39.490477 2522602 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:34:39.498476 2522602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:34:39.507287 2522602 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:34:39.521868 2522602 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:34:39.530684 2522602 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:34:39.539125 2522602 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:34:39.546660 2522602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:34:39.671221 2522602 ssh_runner.go:195] Run: sudo systemctl restart crio
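Taken together, the sed edits above amount to a CRI-O drop-in that pins the pause image, switches the cgroup manager to cgroupfs, runs conmon in the pod cgroup, and lets containers bind low ports by setting net.ipv4.ip_unprivileged_port_start=0. A rough reconstruction of the resulting /etc/crio/crio.conf.d/02-crio.conf fragment (sketched from the commands, not captured from the node):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]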
	I1101 09:34:39.796253 2522602 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:34:39.796320 2522602 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:34:39.799925 2522602 start.go:564] Will wait 60s for crictl version
	I1101 09:34:39.800033 2522602 ssh_runner.go:195] Run: which crictl
	I1101 09:34:39.803602 2522602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:34:39.827707 2522602 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:34:39.827907 2522602 ssh_runner.go:195] Run: crio --version
	I1101 09:34:39.860895 2522602 ssh_runner.go:195] Run: crio --version
	I1101 09:34:39.897757 2522602 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:34:39.900561 2522602 cli_runner.go:164] Run: docker network inspect auto-206273 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:34:39.919507 2522602 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 09:34:39.923339 2522602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:34:39.932890 2522602 kubeadm.go:884] updating cluster {Name:auto-206273 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-206273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:34:39.933018 2522602 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:34:39.933090 2522602 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:34:39.976632 2522602 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:34:39.976656 2522602 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:34:39.976718 2522602 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:34:40.010480 2522602 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:34:40.010522 2522602 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:34:40.010531 2522602 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 09:34:40.010645 2522602 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-206273 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-206273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:34:40.010742 2522602 ssh_runner.go:195] Run: crio config
	I1101 09:34:40.088083 2522602 cni.go:84] Creating CNI manager for ""
	I1101 09:34:40.088177 2522602 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:34:40.088211 2522602 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:34:40.088288 2522602 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-206273 NodeName:auto-206273 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:34:40.088544 2522602 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-206273"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
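The manifest above is what the subsequent kubeadm init consumes. If you want to sanity-check such a config offline before letting the cluster bootstrap, a dry run against the same file (binary and config paths taken from the init command later in this log) is one option:

	/var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run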
	I1101 09:34:40.088625 2522602 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:34:40.099069 2522602 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:34:40.099234 2522602 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:34:40.108514 2522602 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1101 09:34:40.123739 2522602 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:34:40.139820 2522602 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1101 09:34:40.155973 2522602 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:34:40.160760 2522602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:34:40.172474 2522602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:34:40.289907 2522602 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:34:40.306114 2522602 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273 for IP: 192.168.76.2
	I1101 09:34:40.306181 2522602 certs.go:195] generating shared ca certs ...
	I1101 09:34:40.306212 2522602 certs.go:227] acquiring lock for ca certs: {Name:mk24842b93d4e231663829c7c8677798ff77a3a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:34:40.306382 2522602 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key
	I1101 09:34:40.306481 2522602 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key
	I1101 09:34:40.306523 2522602 certs.go:257] generating profile certs ...
	I1101 09:34:40.306608 2522602 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/client.key
	I1101 09:34:40.306644 2522602 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/client.crt with IP's: []
	I1101 09:34:40.509440 2522602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/client.crt ...
	I1101 09:34:40.509476 2522602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/client.crt: {Name:mk0c96fc6b8c470a3ea45179be6f0a05103ee16a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:34:40.509676 2522602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/client.key ...
	I1101 09:34:40.509690 2522602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/client.key: {Name:mkd1baf0391d0be8a232910fe37fad1371a4d9fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:34:40.509785 2522602 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/apiserver.key.4e3cc1f3
	I1101 09:34:40.509800 2522602 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/apiserver.crt.4e3cc1f3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1101 09:34:41.184915 2522602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/apiserver.crt.4e3cc1f3 ...
	I1101 09:34:41.184991 2522602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/apiserver.crt.4e3cc1f3: {Name:mkee41d1cdc103c88e254aee3ec97a81010cd954 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:34:41.185243 2522602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/apiserver.key.4e3cc1f3 ...
	I1101 09:34:41.185281 2522602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/apiserver.key.4e3cc1f3: {Name:mkad47db866d7fecff27883ca9f83abbb90a10c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:34:41.185414 2522602 certs.go:382] copying /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/apiserver.crt.4e3cc1f3 -> /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/apiserver.crt
	I1101 09:34:41.185549 2522602 certs.go:386] copying /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/apiserver.key.4e3cc1f3 -> /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/apiserver.key
	I1101 09:34:41.185682 2522602 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/proxy-client.key
	I1101 09:34:41.185719 2522602 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/proxy-client.crt with IP's: []
	I1101 09:34:41.427229 2522602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/proxy-client.crt ...
	I1101 09:34:41.427307 2522602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/proxy-client.crt: {Name:mk62ef67249c7552cafb9e29e758c64ba1010fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:34:41.427549 2522602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/proxy-client.key ...
	I1101 09:34:41.427588 2522602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/proxy-client.key: {Name:mkb3112ca58d6d49a15bdde2b60e8ce44509c1a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:34:41.427836 2522602 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem (1338 bytes)
	W1101 09:34:41.427919 2522602 certs.go:480] ignoring /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982_empty.pem, impossibly tiny 0 bytes
	I1101 09:34:41.427946 2522602 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 09:34:41.428010 2522602 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:34:41.428066 2522602 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:34:41.428115 2522602 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem (1675 bytes)
	I1101 09:34:41.428197 2522602 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem (1708 bytes)
	I1101 09:34:41.428839 2522602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:34:41.450362 2522602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 09:34:41.472934 2522602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:34:41.492430 2522602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:34:41.512146 2522602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1101 09:34:41.534638 2522602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 09:34:41.557076 2522602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:34:41.576866 2522602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 09:34:41.595888 2522602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem --> /usr/share/ca-certificates/23159822.pem (1708 bytes)
	I1101 09:34:41.615047 2522602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:34:41.635170 2522602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem --> /usr/share/ca-certificates/2315982.pem (1338 bytes)
	I1101 09:34:41.653097 2522602 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:34:41.667215 2522602 ssh_runner.go:195] Run: openssl version
	I1101 09:34:41.677814 2522602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23159822.pem && ln -fs /usr/share/ca-certificates/23159822.pem /etc/ssl/certs/23159822.pem"
	I1101 09:34:41.687948 2522602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23159822.pem
	I1101 09:34:41.691478 2522602 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 08:36 /usr/share/ca-certificates/23159822.pem
	I1101 09:34:41.691575 2522602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23159822.pem
	I1101 09:34:41.737594 2522602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23159822.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:34:41.747793 2522602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:34:41.757872 2522602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:34:41.770149 2522602 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:34:41.770216 2522602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:34:41.816149 2522602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:34:41.825784 2522602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2315982.pem && ln -fs /usr/share/ca-certificates/2315982.pem /etc/ssl/certs/2315982.pem"
	I1101 09:34:41.833942 2522602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2315982.pem
	I1101 09:34:41.837655 2522602 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 08:36 /usr/share/ca-certificates/2315982.pem
	I1101 09:34:41.837720 2522602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2315982.pem
	I1101 09:34:41.878993 2522602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2315982.pem /etc/ssl/certs/51391683.0"
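Each CA dropped into /usr/share/ca-certificates is made trusted by symlinking it under /etc/ssl/certs by its subject hash, which is what the openssl x509 -hash invocations above compute (b5213941 for minikubeCA.pem, 3ec20f2e and 51391683 for the two test certs). The equivalent one-liner for a single cert, assuming the same paths as this run:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem) && sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"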
	I1101 09:34:41.887403 2522602 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:34:41.890926 2522602 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 09:34:41.891022 2522602 kubeadm.go:401] StartCluster: {Name:auto-206273 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-206273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:34:41.891106 2522602 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:34:41.891166 2522602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:34:41.921874 2522602 cri.go:89] found id: ""
	I1101 09:34:41.921961 2522602 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:34:41.929780 2522602 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:34:41.937668 2522602 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 09:34:41.937768 2522602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:34:41.946776 2522602 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 09:34:41.946842 2522602 kubeadm.go:158] found existing configuration files:
	
	I1101 09:34:41.946924 2522602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 09:34:41.954889 2522602 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 09:34:41.954978 2522602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 09:34:41.962394 2522602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 09:34:41.970835 2522602 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 09:34:41.970951 2522602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:34:41.978558 2522602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 09:34:41.986452 2522602 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 09:34:41.986522 2522602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:34:41.994738 2522602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 09:34:42.002760 2522602 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 09:34:42.002850 2522602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 09:34:42.019089 2522602 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 09:34:42.068304 2522602 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 09:34:42.068449 2522602 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 09:34:42.101224 2522602 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 09:34:42.101362 2522602 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 09:34:42.101409 2522602 kubeadm.go:319] OS: Linux
	I1101 09:34:42.101476 2522602 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 09:34:42.101549 2522602 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 09:34:42.101624 2522602 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 09:34:42.101690 2522602 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 09:34:42.101757 2522602 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 09:34:42.101832 2522602 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 09:34:42.101914 2522602 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 09:34:42.101981 2522602 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 09:34:42.102045 2522602 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 09:34:42.209927 2522602 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 09:34:42.210057 2522602 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 09:34:42.210185 2522602 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 09:34:42.232195 2522602 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 09:34:42.237983 2522602 out.go:252]   - Generating certificates and keys ...
	I1101 09:34:42.238453 2522602 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 09:34:42.240085 2522602 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	W1101 09:34:40.916427 2516487 pod_ready.go:104] pod "coredns-66bc5c9577-7hh2n" is not "Ready", error: <nil>
	W1101 09:34:43.420281 2516487 pod_ready.go:104] pod "coredns-66bc5c9577-7hh2n" is not "Ready", error: <nil>
	I1101 09:34:44.414354 2516487 pod_ready.go:94] pod "coredns-66bc5c9577-7hh2n" is "Ready"
	I1101 09:34:44.414436 2516487 pod_ready.go:86] duration metric: took 36.505246555s for pod "coredns-66bc5c9577-7hh2n" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:34:44.414463 2516487 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mbmf5" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:34:44.418819 2516487 pod_ready.go:94] pod "coredns-66bc5c9577-mbmf5" is "Ready"
	I1101 09:34:44.418843 2516487 pod_ready.go:86] duration metric: took 4.367009ms for pod "coredns-66bc5c9577-mbmf5" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:34:44.421386 2516487 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-703627" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:34:44.447348 2516487 pod_ready.go:94] pod "etcd-default-k8s-diff-port-703627" is "Ready"
	I1101 09:34:44.447373 2516487 pod_ready.go:86] duration metric: took 25.966172ms for pod "etcd-default-k8s-diff-port-703627" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:34:44.469041 2516487 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-703627" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:34:44.485793 2516487 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-703627" is "Ready"
	I1101 09:34:44.485860 2516487 pod_ready.go:86] duration metric: took 16.795111ms for pod "kube-apiserver-default-k8s-diff-port-703627" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:34:44.613245 2516487 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-703627" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:34:45.014461 2516487 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-703627" is "Ready"
	I1101 09:34:45.014545 2516487 pod_ready.go:86] duration metric: took 401.227475ms for pod "kube-controller-manager-default-k8s-diff-port-703627" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:34:45.213946 2516487 pod_ready.go:83] waiting for pod "kube-proxy-6lwj9" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:34:45.612870 2516487 pod_ready.go:94] pod "kube-proxy-6lwj9" is "Ready"
	I1101 09:34:45.612897 2516487 pod_ready.go:86] duration metric: took 398.861821ms for pod "kube-proxy-6lwj9" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:34:45.813593 2516487 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-703627" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:34:46.212610 2516487 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-703627" is "Ready"
	I1101 09:34:46.212640 2516487 pod_ready.go:86] duration metric: took 398.97774ms for pod "kube-scheduler-default-k8s-diff-port-703627" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:34:46.212656 2516487 pod_ready.go:40] duration metric: took 38.325218844s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:34:46.292672 2516487 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 09:34:46.296063 2516487 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-703627" cluster and "default" namespace by default
	I1101 09:34:42.779273 2522602 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 09:34:43.104822 2522602 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 09:34:43.729182 2522602 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 09:34:43.970315 2522602 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 09:34:45.326546 2522602 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 09:34:45.326936 2522602 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-206273 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 09:34:45.462228 2522602 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 09:34:45.462640 2522602 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-206273 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 09:34:46.066217 2522602 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 09:34:47.101405 2522602 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 09:34:47.253475 2522602 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 09:34:47.253774 2522602 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 09:34:47.579799 2522602 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 09:34:49.184211 2522602 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 09:34:50.469353 2522602 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 09:34:51.478449 2522602 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 09:34:51.590708 2522602 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 09:34:51.591314 2522602 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 09:34:51.593837 2522602 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 09:34:51.597479 2522602 out.go:252]   - Booting up control plane ...
	I1101 09:34:51.597575 2522602 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:34:51.597656 2522602 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:34:51.597725 2522602 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 09:34:51.614949 2522602 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:34:51.615057 2522602 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 09:34:51.622885 2522602 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 09:34:51.623216 2522602 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:34:51.623262 2522602 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:34:51.754176 2522602 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 09:34:51.754319 2522602 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 09:34:53.263134 2522602 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.509042493s
	I1101 09:34:53.267732 2522602 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 09:34:53.268199 2522602 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1101 09:34:53.272210 2522602 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 09:34:53.272762 2522602 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 09:34:58.384520 2522602 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.111676564s
	I1101 09:35:01.035644 2522602 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 7.762388824s
	I1101 09:35:01.776711 2522602 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.502336567s
	I1101 09:35:01.818642 2522602 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 09:35:01.848332 2522602 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 09:35:01.879065 2522602 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 09:35:01.879268 2522602 kubeadm.go:319] [mark-control-plane] Marking the node auto-206273 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 09:35:01.908712 2522602 kubeadm.go:319] [bootstrap-token] Using token: wv5weu.3rzz14fjiv1o5v51
	
	
	==> CRI-O <==
	Nov 01 09:34:36 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:36.370481759Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=60a11dcb-ec4e-45a3-ba52-6419b38452a5 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:34:36 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:36.371708703Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=2eea7837-83f5-4418-bef1-e0ffb5e2c96d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:34:36 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:36.372005737Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:34:36 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:36.377886304Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:34:36 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:36.378263533Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/4f925dfdba826b95e988bb09d59efcc9e00fe2f5313bfc9251a16e341791e1ac/merged/etc/passwd: no such file or directory"
	Nov 01 09:34:36 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:36.378410401Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/4f925dfdba826b95e988bb09d59efcc9e00fe2f5313bfc9251a16e341791e1ac/merged/etc/group: no such file or directory"
	Nov 01 09:34:36 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:36.378792692Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:34:36 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:36.394156516Z" level=info msg="Created container 785bd2a3eea280c35729177748a00a80a454d2d6597849d0896d10a19b7e2833: kube-system/storage-provisioner/storage-provisioner" id=2eea7837-83f5-4418-bef1-e0ffb5e2c96d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:34:36 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:36.395111509Z" level=info msg="Starting container: 785bd2a3eea280c35729177748a00a80a454d2d6597849d0896d10a19b7e2833" id=7a9812c1-3204-4f75-b98b-b81273ddd024 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:34:36 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:36.398421128Z" level=info msg="Started container" PID=1697 containerID=785bd2a3eea280c35729177748a00a80a454d2d6597849d0896d10a19b7e2833 description=kube-system/storage-provisioner/storage-provisioner id=7a9812c1-3204-4f75-b98b-b81273ddd024 name=/runtime.v1.RuntimeService/StartContainer sandboxID=71959e674b8ce9d3865629406ee5b011f0d306030dbd51998dfce9690b7131db
	Nov 01 09:34:46 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:46.408446841Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:34:46 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:46.420420244Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:34:46 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:46.420595304Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:34:46 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:46.420667047Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:34:46 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:46.432086636Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:34:46 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:46.432251226Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:34:46 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:46.432323758Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:34:46 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:46.438663906Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:34:46 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:46.43881911Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:34:46 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:46.438890214Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:34:46 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:46.456140082Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:34:46 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:46.456183379Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:34:46 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:46.456203104Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:34:46 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:46.471671975Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:34:46 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:46.471838214Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	785bd2a3eea28       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           26 seconds ago       Running             storage-provisioner         2                   71959e674b8ce       storage-provisioner                                    kube-system
	eaf8d298a127c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           31 seconds ago       Exited              dashboard-metrics-scraper   2                   42840e9bae406       dashboard-metrics-scraper-6ffb444bf9-kqqm9             kubernetes-dashboard
	cbda83eea242f       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   39 seconds ago       Running             kubernetes-dashboard        0                   4502c65a4ca63       kubernetes-dashboard-855c9754f9-l6cs4                  kubernetes-dashboard
	2c75f0a8e4317       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           56 seconds ago       Running             coredns                     1                   9e5e08959386f       coredns-66bc5c9577-mbmf5                               kube-system
	890a087cbbfb6       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           57 seconds ago       Running             busybox                     1                   121038856e59c       busybox                                                default
	e3203b28a815b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           57 seconds ago       Exited              storage-provisioner         1                   71959e674b8ce       storage-provisioner                                    kube-system
	906fe23ff42d4       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           57 seconds ago       Running             kube-proxy                  1                   076292d587710       kube-proxy-6lwj9                                       kube-system
	eb67b5d7cf844       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           57 seconds ago       Running             kindnet-cni                 1                   1d540eb1d4ecf       kindnet-td2vz                                          kube-system
	988bd3df89407       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           57 seconds ago       Running             coredns                     1                   e812339ce782a       coredns-66bc5c9577-7hh2n                               kube-system
	ee79a7fc9cfee       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   271ad71b8345f       kube-apiserver-default-k8s-diff-port-703627            kube-system
	da7e2f29a7555       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   1d9f882d85f59       kube-scheduler-default-k8s-diff-port-703627            kube-system
	ae10c649f560f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   6ea9ad3e885bc       kube-controller-manager-default-k8s-diff-port-703627   kube-system
	c7d1cc29b1ea5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   d8af357421432       etcd-default-k8s-diff-port-703627                      kube-system
	
	
	==> coredns [2c75f0a8e43174ffcb23721d35794e30d0c951d79bbefa0776e5d7225c6a6443] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53323 - 45120 "HINFO IN 3236742124570476460.4777910283780346032. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.032977579s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [988bd3df894076818e904c7d20f94d20da1787b44cb9aa57fbf416feb32b2c15] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56599 - 54997 "HINFO IN 4034213257305694922.8264765600997235812. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013129178s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-703627
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-703627
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=default-k8s-diff-port-703627
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_32_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:32:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-703627
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:34:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:34:36 +0000   Sat, 01 Nov 2025 09:32:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:34:36 +0000   Sat, 01 Nov 2025 09:32:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:34:36 +0000   Sat, 01 Nov 2025 09:32:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:34:36 +0000   Sat, 01 Nov 2025 09:33:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-703627
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                715daf08-52c6-47e9-9d22-22f4a756b35f
	  Boot ID:                    eebecd53-57fd-46e5-aa39-103fca906436
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 coredns-66bc5c9577-7hh2n                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m23s
	  kube-system                 coredns-66bc5c9577-mbmf5                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m23s
	  kube-system                 etcd-default-k8s-diff-port-703627                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m28s
	  kube-system                 kindnet-td2vz                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m23s
	  kube-system                 kube-apiserver-default-k8s-diff-port-703627             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-703627    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-proxy-6lwj9                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-scheduler-default-k8s-diff-port-703627             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-kqqm9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-l6cs4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m22s                  kube-proxy       
	  Normal   Starting                 55s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m37s (x8 over 2m37s)  kubelet          Node default-k8s-diff-port-703627 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m37s (x8 over 2m37s)  kubelet          Node default-k8s-diff-port-703627 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m37s (x8 over 2m37s)  kubelet          Node default-k8s-diff-port-703627 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m28s                  kubelet          Node default-k8s-diff-port-703627 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m28s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m28s                  kubelet          Node default-k8s-diff-port-703627 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m28s                  kubelet          Node default-k8s-diff-port-703627 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m28s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m24s                  node-controller  Node default-k8s-diff-port-703627 event: Registered Node default-k8s-diff-port-703627 in Controller
	  Normal   NodeReady                102s                   kubelet          Node default-k8s-diff-port-703627 status is now: NodeReady
	  Normal   Starting                 67s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 67s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  67s (x8 over 67s)      kubelet          Node default-k8s-diff-port-703627 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    67s (x8 over 67s)      kubelet          Node default-k8s-diff-port-703627 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     67s (x8 over 67s)      kubelet          Node default-k8s-diff-port-703627 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                    node-controller  Node default-k8s-diff-port-703627 event: Registered Node default-k8s-diff-port-703627 in Controller
	
	
	==> dmesg <==
	[Nov 1 09:15] overlayfs: idmapped layers are currently not supported
	[ +24.457663] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:16] overlayfs: idmapped layers are currently not supported
	[ +26.408819] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:18] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:22] overlayfs: idmapped layers are currently not supported
	[ +31.970573] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:24] overlayfs: idmapped layers are currently not supported
	[ +34.721891] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:25] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:26] overlayfs: idmapped layers are currently not supported
	[  +0.217637] overlayfs: idmapped layers are currently not supported
	[ +42.063471] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:29] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:30] overlayfs: idmapped layers are currently not supported
	[ +22.794250] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:31] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:33] overlayfs: idmapped layers are currently not supported
	[ +18.806441] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:34] overlayfs: idmapped layers are currently not supported
	[ +47.017810] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c7d1cc29b1ea5c8867b99a096fc1bb9f05c294172a955361ff24adccbc307e8b] <==
	{"level":"warn","ts":"2025-11-01T09:34:01.557668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:01.620022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:01.664494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:01.708153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:01.790441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:01.802018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:01.867988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:01.904974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:01.929108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:01.976016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:02.031631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:02.102544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:02.138594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:02.209931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:02.259056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:02.366234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:02.409539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:02.487948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:02.507940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:02.577942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:02.665281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:02.695718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:02.748387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:02.784648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:03.008604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56652","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:35:03 up 18:17,  0 user,  load average: 5.93, 4.42, 3.43
	Linux default-k8s-diff-port-703627 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [eb67b5d7cf8442d6e208955bcc3c7672c8626771d4a76dbef50244c7fd76ddb5] <==
	I1101 09:34:06.189847       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:34:06.190163       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 09:34:06.190429       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:34:06.190442       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:34:06.190452       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:34:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:34:06.407654       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:34:06.407671       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:34:06.407681       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:34:06.408363       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 09:34:36.408021       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 09:34:36.408329       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 09:34:36.408403       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 09:34:36.408438       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1101 09:34:38.008169       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:34:38.008321       1 metrics.go:72] Registering metrics
	I1101 09:34:38.008427       1 controller.go:711] "Syncing nftables rules"
	I1101 09:34:46.408096       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 09:34:46.408156       1 main.go:301] handling current node
	I1101 09:34:56.415897       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 09:34:56.416015       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ee79a7fc9cfee9bef0f776db44e3429ff28411131f6bdc1c4562483440dc3f4c] <==
	I1101 09:34:04.774282       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:34:04.774356       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 09:34:04.774363       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 09:34:04.775439       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 09:34:04.775643       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 09:34:04.779172       1 aggregator.go:171] initial CRD sync complete...
	I1101 09:34:04.779187       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 09:34:04.779194       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:34:04.779201       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:34:04.788545       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:34:04.789100       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 09:34:04.789211       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 09:34:04.806665       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1101 09:34:04.920243       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 09:34:04.949341       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:34:05.107538       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:34:06.887254       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:34:06.964306       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:34:07.060256       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:34:07.103113       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:34:07.537157       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.27.62"}
	I1101 09:34:07.618903       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.174.231"}
	I1101 09:34:08.947344       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:34:09.111594       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:34:09.337881       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [ae10c649f560f9607936e15ba64a4779c42997b6bfc46ec03edd143e585f8bb2] <==
	I1101 09:34:08.876153       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 09:34:08.876179       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 09:34:08.874561       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 09:34:08.880172       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 09:34:08.880395       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 09:34:08.888141       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:34:08.889261       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:34:08.890567       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 09:34:08.903330       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 09:34:08.903627       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 09:34:08.903696       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 09:34:08.904133       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:34:08.903713       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 09:34:08.907044       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 09:34:08.907126       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 09:34:08.907138       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 09:34:08.907149       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 09:34:08.915917       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 09:34:08.916156       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 09:34:08.919954       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:34:08.932066       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:34:08.932749       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:34:08.932805       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:34:09.384750       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	I1101 09:34:09.385478       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [906fe23ff42d46025170a70959cf630e42fc9c5c8900d890108c863e5308c3a1] <==
	I1101 09:34:06.901203       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:34:07.130449       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:34:07.314905       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:34:07.314938       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 09:34:07.315002       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:34:07.575119       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:34:07.575175       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:34:07.643743       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:34:07.645390       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:34:07.645411       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:34:07.664099       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:34:07.664122       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:34:07.664457       1 config.go:200] "Starting service config controller"
	I1101 09:34:07.664464       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:34:07.664779       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:34:07.664786       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:34:07.665236       1 config.go:309] "Starting node config controller"
	I1101 09:34:07.665244       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:34:07.665249       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:34:07.765362       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:34:07.765600       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:34:07.765630       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [da7e2f29a75554b0877ff12539ff3a7b3a2f4e382fdeae7e7c099e23f545bfe9] <==
	I1101 09:34:02.615168       1 serving.go:386] Generated self-signed cert in-memory
	W1101 09:34:04.368811       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 09:34:04.368843       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 09:34:04.368852       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 09:34:04.368860       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 09:34:04.684889       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:34:04.684924       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:34:04.697087       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:34:04.697198       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:34:04.697216       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:34:04.697233       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:34:04.813992       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:34:09 default-k8s-diff-port-703627 kubelet[794]: W1101 09:34:09.706401     794 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a747d7437780c8943ddef42d5ec2400858d0693e94483b75825664710eb98d9e/crio-42840e9bae40628f1e6ca37bbb169079a7ac4bc2240f53257272e05b219a15e7 WatchSource:0}: Error finding container 42840e9bae40628f1e6ca37bbb169079a7ac4bc2240f53257272e05b219a15e7: Status 404 returned error can't find the container with id 42840e9bae40628f1e6ca37bbb169079a7ac4bc2240f53257272e05b219a15e7
	Nov 01 09:34:09 default-k8s-diff-port-703627 kubelet[794]: W1101 09:34:09.744930     794 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a747d7437780c8943ddef42d5ec2400858d0693e94483b75825664710eb98d9e/crio-4502c65a4ca63d388f4dd7a97feab43a536d8fc64323f062cdf7c1805da0d60f WatchSource:0}: Error finding container 4502c65a4ca63d388f4dd7a97feab43a536d8fc64323f062cdf7c1805da0d60f: Status 404 returned error can't find the container with id 4502c65a4ca63d388f4dd7a97feab43a536d8fc64323f062cdf7c1805da0d60f
	Nov 01 09:34:11 default-k8s-diff-port-703627 kubelet[794]: I1101 09:34:11.695822     794 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 09:34:13 default-k8s-diff-port-703627 kubelet[794]: I1101 09:34:13.933380     794 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 09:34:16 default-k8s-diff-port-703627 kubelet[794]: I1101 09:34:16.280890     794 scope.go:117] "RemoveContainer" containerID="0af332a14f15b900cd18824407fd56e417a1a172e067d26d44e0487129243413"
	Nov 01 09:34:17 default-k8s-diff-port-703627 kubelet[794]: I1101 09:34:17.301804     794 scope.go:117] "RemoveContainer" containerID="0af332a14f15b900cd18824407fd56e417a1a172e067d26d44e0487129243413"
	Nov 01 09:34:17 default-k8s-diff-port-703627 kubelet[794]: I1101 09:34:17.302184     794 scope.go:117] "RemoveContainer" containerID="3de08bab4f757a283d2d7aa45c1faf339b62a48c97805a01a6b241c8d7a3d5ba"
	Nov 01 09:34:17 default-k8s-diff-port-703627 kubelet[794]: E1101 09:34:17.302357     794 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kqqm9_kubernetes-dashboard(9ec842f8-251c-4115-a4c6-2716850c17dd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kqqm9" podUID="9ec842f8-251c-4115-a4c6-2716850c17dd"
	Nov 01 09:34:18 default-k8s-diff-port-703627 kubelet[794]: I1101 09:34:18.308282     794 scope.go:117] "RemoveContainer" containerID="3de08bab4f757a283d2d7aa45c1faf339b62a48c97805a01a6b241c8d7a3d5ba"
	Nov 01 09:34:18 default-k8s-diff-port-703627 kubelet[794]: E1101 09:34:18.308431     794 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kqqm9_kubernetes-dashboard(9ec842f8-251c-4115-a4c6-2716850c17dd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kqqm9" podUID="9ec842f8-251c-4115-a4c6-2716850c17dd"
	Nov 01 09:34:19 default-k8s-diff-port-703627 kubelet[794]: I1101 09:34:19.659062     794 scope.go:117] "RemoveContainer" containerID="3de08bab4f757a283d2d7aa45c1faf339b62a48c97805a01a6b241c8d7a3d5ba"
	Nov 01 09:34:19 default-k8s-diff-port-703627 kubelet[794]: E1101 09:34:19.659245     794 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kqqm9_kubernetes-dashboard(9ec842f8-251c-4115-a4c6-2716850c17dd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kqqm9" podUID="9ec842f8-251c-4115-a4c6-2716850c17dd"
	Nov 01 09:34:31 default-k8s-diff-port-703627 kubelet[794]: I1101 09:34:31.883744     794 scope.go:117] "RemoveContainer" containerID="3de08bab4f757a283d2d7aa45c1faf339b62a48c97805a01a6b241c8d7a3d5ba"
	Nov 01 09:34:32 default-k8s-diff-port-703627 kubelet[794]: I1101 09:34:32.355411     794 scope.go:117] "RemoveContainer" containerID="3de08bab4f757a283d2d7aa45c1faf339b62a48c97805a01a6b241c8d7a3d5ba"
	Nov 01 09:34:32 default-k8s-diff-port-703627 kubelet[794]: I1101 09:34:32.355696     794 scope.go:117] "RemoveContainer" containerID="eaf8d298a127caa808c3e83b43303a6d0f654deca7780b6baed673bc56707d82"
	Nov 01 09:34:32 default-k8s-diff-port-703627 kubelet[794]: E1101 09:34:32.355877     794 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kqqm9_kubernetes-dashboard(9ec842f8-251c-4115-a4c6-2716850c17dd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kqqm9" podUID="9ec842f8-251c-4115-a4c6-2716850c17dd"
	Nov 01 09:34:32 default-k8s-diff-port-703627 kubelet[794]: I1101 09:34:32.430088     794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-l6cs4" podStartSLOduration=9.673156036 podStartE2EDuration="23.430068804s" podCreationTimestamp="2025-11-01 09:34:09 +0000 UTC" firstStartedPulling="2025-11-01 09:34:09.783685387 +0000 UTC m=+13.241797512" lastFinishedPulling="2025-11-01 09:34:23.540598155 +0000 UTC m=+26.998710280" observedRunningTime="2025-11-01 09:34:24.349663761 +0000 UTC m=+27.807775902" watchObservedRunningTime="2025-11-01 09:34:32.430068804 +0000 UTC m=+35.888180928"
	Nov 01 09:34:36 default-k8s-diff-port-703627 kubelet[794]: I1101 09:34:36.368684     794 scope.go:117] "RemoveContainer" containerID="e3203b28a815bbf14e3e0b281844d7e2c9449efdae4d2b238d97510ac329b0a5"
	Nov 01 09:34:39 default-k8s-diff-port-703627 kubelet[794]: I1101 09:34:39.659415     794 scope.go:117] "RemoveContainer" containerID="eaf8d298a127caa808c3e83b43303a6d0f654deca7780b6baed673bc56707d82"
	Nov 01 09:34:39 default-k8s-diff-port-703627 kubelet[794]: E1101 09:34:39.659600     794 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kqqm9_kubernetes-dashboard(9ec842f8-251c-4115-a4c6-2716850c17dd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kqqm9" podUID="9ec842f8-251c-4115-a4c6-2716850c17dd"
	Nov 01 09:34:50 default-k8s-diff-port-703627 kubelet[794]: I1101 09:34:50.888132     794 scope.go:117] "RemoveContainer" containerID="eaf8d298a127caa808c3e83b43303a6d0f654deca7780b6baed673bc56707d82"
	Nov 01 09:34:50 default-k8s-diff-port-703627 kubelet[794]: E1101 09:34:50.888305     794 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kqqm9_kubernetes-dashboard(9ec842f8-251c-4115-a4c6-2716850c17dd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kqqm9" podUID="9ec842f8-251c-4115-a4c6-2716850c17dd"
	Nov 01 09:34:59 default-k8s-diff-port-703627 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 09:34:59 default-k8s-diff-port-703627 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 09:34:59 default-k8s-diff-port-703627 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [cbda83eea242fce4b409534daa04c22b9a0d561f0566989379c73d1d837b7244] <==
	2025/11/01 09:34:23 Using namespace: kubernetes-dashboard
	2025/11/01 09:34:23 Using in-cluster config to connect to apiserver
	2025/11/01 09:34:23 Using secret token for csrf signing
	2025/11/01 09:34:23 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 09:34:23 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 09:34:23 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 09:34:23 Generating JWE encryption key
	2025/11/01 09:34:23 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 09:34:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 09:34:24 Initializing JWE encryption key from synchronized object
	2025/11/01 09:34:24 Creating in-cluster Sidecar client
	2025/11/01 09:34:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:34:24 Serving insecurely on HTTP port: 9090
	2025/11/01 09:34:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:34:23 Starting overwatch
	
	
	==> storage-provisioner [785bd2a3eea280c35729177748a00a80a454d2d6597849d0896d10a19b7e2833] <==
	I1101 09:34:36.434685       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 09:34:36.434756       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 09:34:36.437940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:39.893367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:44.154374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:47.754060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:50.808745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:53.831293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:53.839469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:34:53.839738       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 09:34:53.840360       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c2b0c60a-1c26-4e31-8638-769a7831ea66", APIVersion:"v1", ResourceVersion:"697", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-703627_bf0a00d4-3e73-44aa-bd82-e680d1f7aa16 became leader
	I1101 09:34:53.840498       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-703627_bf0a00d4-3e73-44aa-bd82-e680d1f7aa16!
	W1101 09:34:53.851161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:53.854846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:34:53.941356       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-703627_bf0a00d4-3e73-44aa-bd82-e680d1f7aa16!
	W1101 09:34:55.858384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:55.863102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:57.866492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:57.871028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:59.877599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:59.888645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:35:01.894606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:35:01.904079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:35:03.908124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:35:03.923005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e3203b28a815bbf14e3e0b281844d7e2c9449efdae4d2b238d97510ac329b0a5] <==
	I1101 09:34:06.068141       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 09:34:36.070588       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
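Note on the storage-provisioner output above: leader election is taken on the Endpoints object kube-system/k8s.io-minikube-hostpath, and each acquire/renew round-trip draws a "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warning from the apiserver. A minimal client-go sketch of the equivalent Lease-based election is below; it is illustrative only (the identity source and callbacks are placeholders), not the provisioner's actual code.

	package main
	
	import (
		"context"
		"os"
		"time"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
	
		// Lease-based lock: same name/namespace as in the log above, but backed
		// by coordination.k8s.io Leases instead of the deprecated v1 Endpoints.
		lock := &resourcelock.LeaseLock{
			LeaseMeta: metav1.ObjectMeta{
				Name:      "k8s.io-minikube-hostpath",
				Namespace: "kube-system",
			},
			Client: client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{
				Identity: os.Getenv("POD_NAME"), // placeholder identity source
			},
		}
	
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					// start the provisioner controller once leadership is acquired
				},
				OnStoppedLeading: func() {
					// stop work if leadership is lost
				},
			},
		})
	}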
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-703627 -n default-k8s-diff-port-703627
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-703627 -n default-k8s-diff-port-703627: exit status 2 (417.943482ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
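The --format flag above renders a Go template over minikube's status struct, and a non-zero exit code generally indicates that some component is not in the expected state; the harness records exit status 2 but treats it as "may be ok" and continues. A rough, self-contained sketch of that check (binary name, flags and profile are taken from the command above; error handling is simplified) might look like:

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// checkAPIServer runs `minikube status` with a Go-template format and returns
	// the printed field plus the exit code, mirroring the post-mortem helper.
	func checkAPIServer(profile string) (string, int, error) {
		cmd := exec.Command("minikube", "status",
			"--format", "{{.APIServer}}", "-p", profile)
		out, err := cmd.CombinedOutput()
		state := strings.TrimSpace(string(out))
		if exitErr, ok := err.(*exec.ExitError); ok {
			// Non-zero exit: log it like the harness does ("may be ok") and move on.
			return state, exitErr.ExitCode(), nil
		}
		return state, 0, err
	}
	
	func main() {
		state, code, err := checkAPIServer("default-k8s-diff-port-703627")
		fmt.Printf("apiserver=%q exit=%d err=%v\n", state, code, err)
	}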
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-703627 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-703627
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-703627:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a747d7437780c8943ddef42d5ec2400858d0693e94483b75825664710eb98d9e",
	        "Created": "2025-11-01T09:32:04.900915027Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2516618,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:33:49.028800033Z",
	            "FinishedAt": "2025-11-01T09:33:48.1242607Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/a747d7437780c8943ddef42d5ec2400858d0693e94483b75825664710eb98d9e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a747d7437780c8943ddef42d5ec2400858d0693e94483b75825664710eb98d9e/hostname",
	        "HostsPath": "/var/lib/docker/containers/a747d7437780c8943ddef42d5ec2400858d0693e94483b75825664710eb98d9e/hosts",
	        "LogPath": "/var/lib/docker/containers/a747d7437780c8943ddef42d5ec2400858d0693e94483b75825664710eb98d9e/a747d7437780c8943ddef42d5ec2400858d0693e94483b75825664710eb98d9e-json.log",
	        "Name": "/default-k8s-diff-port-703627",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-703627:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-703627",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a747d7437780c8943ddef42d5ec2400858d0693e94483b75825664710eb98d9e",
	                "LowerDir": "/var/lib/docker/overlay2/b02ded618c44d4ffa151302b7a817601e39ffe2f362b1ddbc18b362601181ea2-init/diff:/var/lib/docker/overlay2/e248e2c4c8c52e2b41c7098e27a1e6d3433c7b0d01c47093073da500268c4b77/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b02ded618c44d4ffa151302b7a817601e39ffe2f362b1ddbc18b362601181ea2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b02ded618c44d4ffa151302b7a817601e39ffe2f362b1ddbc18b362601181ea2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b02ded618c44d4ffa151302b7a817601e39ffe2f362b1ddbc18b362601181ea2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-703627",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-703627/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-703627",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-703627",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-703627",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "39a4c8a957e126c975ee18a383bf143b0d1b5be8694aeb07925364177018ffac",
	            "SandboxKey": "/var/run/docker/netns/39a4c8a957e1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36375"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36376"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36379"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36377"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36378"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-703627": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:15:25:c2:a8:78",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f92e55acde037535672c6bdfac6afcfec87a27f01e6451819c4f246fbcbac0db",
	                    "EndpointID": "34e1a1419bb65e3dcbc6e8e613125117d93b32ba97c71329d4acfd28e835ecc8",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-703627",
	                        "a747d7437780"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
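The inspect output above shows the kic container publishing ports 22, 2376, 5000, 8444 and 32443 on 127.0.0.1 with ephemeral host ports (8444/tcp, the custom apiserver port under test, maps to 36378). The same Go-template lookup that appears later in this log for "22/tcp" can resolve any of these mappings; a small illustrative wrapper (names taken from this report, not from minikube's source):

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// hostPortFor returns the host port Docker assigned to a published container
	// port, using the `docker container inspect -f` template seen in the log below.
	func hostPortFor(container, port string) (string, error) {
		tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}
	
	func main() {
		// Per the inspect output above, "8444/tcp" resolves to 36378 here.
		p, err := hostPortFor("default-k8s-diff-port-703627", "8444/tcp")
		fmt.Println(p, err)
	}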
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-703627 -n default-k8s-diff-port-703627
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-703627 -n default-k8s-diff-port-703627: exit status 2 (393.767201ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-703627 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-703627 logs -n 25: (1.512025811s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ pause   │ -p no-preload-357229 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ delete  │ -p no-preload-357229                                                                                                                                                                                                                          │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ delete  │ -p no-preload-357229                                                                                                                                                                                                                          │ no-preload-357229            │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ delete  │ -p disable-driver-mounts-054033                                                                                                                                                                                                               │ disable-driver-mounts-054033 │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ start   │ -p default-k8s-diff-port-703627 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-703627 │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:33 UTC │
	│ image   │ embed-certs-312549 image list --format=json                                                                                                                                                                                                   │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ pause   │ -p embed-certs-312549 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │                     │
	│ delete  │ -p embed-certs-312549                                                                                                                                                                                                                         │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ delete  │ -p embed-certs-312549                                                                                                                                                                                                                         │ embed-certs-312549           │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ start   │ -p newest-cni-124713 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-124713            │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-703627 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-703627 │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-703627 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-703627 │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-703627 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-703627 │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ start   │ -p default-k8s-diff-port-703627 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-703627 │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:34 UTC │
	│ addons  │ enable metrics-server -p newest-cni-124713 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-124713            │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │                     │
	│ stop    │ -p newest-cni-124713 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-124713            │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ addons  │ enable dashboard -p newest-cni-124713 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-124713            │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:33 UTC │
	│ start   │ -p newest-cni-124713 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-124713            │ jenkins │ v1.37.0 │ 01 Nov 25 09:33 UTC │ 01 Nov 25 09:34 UTC │
	│ image   │ newest-cni-124713 image list --format=json                                                                                                                                                                                                    │ newest-cni-124713            │ jenkins │ v1.37.0 │ 01 Nov 25 09:34 UTC │ 01 Nov 25 09:34 UTC │
	│ pause   │ -p newest-cni-124713 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-124713            │ jenkins │ v1.37.0 │ 01 Nov 25 09:34 UTC │                     │
	│ delete  │ -p newest-cni-124713                                                                                                                                                                                                                          │ newest-cni-124713            │ jenkins │ v1.37.0 │ 01 Nov 25 09:34 UTC │ 01 Nov 25 09:34 UTC │
	│ delete  │ -p newest-cni-124713                                                                                                                                                                                                                          │ newest-cni-124713            │ jenkins │ v1.37.0 │ 01 Nov 25 09:34 UTC │ 01 Nov 25 09:34 UTC │
	│ start   │ -p auto-206273 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-206273                  │ jenkins │ v1.37.0 │ 01 Nov 25 09:34 UTC │                     │
	│ image   │ default-k8s-diff-port-703627 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-703627 │ jenkins │ v1.37.0 │ 01 Nov 25 09:34 UTC │ 01 Nov 25 09:34 UTC │
	│ pause   │ -p default-k8s-diff-port-703627 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-703627 │ jenkins │ v1.37.0 │ 01 Nov 25 09:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:34:27
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:34:27.623723 2522602 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:34:27.623955 2522602 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:34:27.623985 2522602 out.go:374] Setting ErrFile to fd 2...
	I1101 09:34:27.624005 2522602 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:34:27.624404 2522602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 09:34:27.624987 2522602 out.go:368] Setting JSON to false
	I1101 09:34:27.628184 2522602 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":65814,"bootTime":1761923854,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 09:34:27.628255 2522602 start.go:143] virtualization:  
	I1101 09:34:27.632265 2522602 out.go:179] * [auto-206273] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 09:34:27.636521 2522602 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:34:27.636620 2522602 notify.go:221] Checking for updates...
	I1101 09:34:27.642883 2522602 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:34:27.646046 2522602 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:34:27.649140 2522602 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2314135/.minikube
	I1101 09:34:27.652205 2522602 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 09:34:27.655124 2522602 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:34:27.658764 2522602 config.go:182] Loaded profile config "default-k8s-diff-port-703627": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:34:27.658950 2522602 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:34:27.684835 2522602 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:34:27.684973 2522602 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:34:27.768467 2522602 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 09:34:27.757664616 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:34:27.768578 2522602 docker.go:319] overlay module found
	I1101 09:34:27.773744 2522602 out.go:179] * Using the docker driver based on user configuration
	I1101 09:34:27.776611 2522602 start.go:309] selected driver: docker
	I1101 09:34:27.776633 2522602 start.go:930] validating driver "docker" against <nil>
	I1101 09:34:27.776648 2522602 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:34:27.777392 2522602 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:34:27.833998 2522602 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 09:34:27.82513335 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:34:27.834235 2522602 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:34:27.834490 2522602 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:34:27.837433 2522602 out.go:179] * Using Docker driver with root privileges
	I1101 09:34:27.840267 2522602 cni.go:84] Creating CNI manager for ""
	I1101 09:34:27.840333 2522602 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:34:27.840345 2522602 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:34:27.840430 2522602 start.go:353] cluster config:
	{Name:auto-206273 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-206273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1101 09:34:27.843535 2522602 out.go:179] * Starting "auto-206273" primary control-plane node in "auto-206273" cluster
	I1101 09:34:27.846298 2522602 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:34:27.849143 2522602 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:34:27.851974 2522602 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:34:27.852050 2522602 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:34:27.852104 2522602 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1101 09:34:27.852119 2522602 cache.go:59] Caching tarball of preloaded images
	I1101 09:34:27.852215 2522602 preload.go:233] Found /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 09:34:27.852225 2522602 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:34:27.852329 2522602 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/config.json ...
	I1101 09:34:27.852346 2522602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/config.json: {Name:mk7832f26eaceb1a8696643732700482a4c58633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:34:27.870732 2522602 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:34:27.870755 2522602 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:34:27.870769 2522602 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:34:27.870805 2522602 start.go:360] acquireMachinesLock for auto-206273: {Name:mkb13d9460e4862596764be16bc3911d20f0d574 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:34:27.870914 2522602 start.go:364] duration metric: took 88.539µs to acquireMachinesLock for "auto-206273"
	I1101 09:34:27.870986 2522602 start.go:93] Provisioning new machine with config: &{Name:auto-206273 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-206273 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:34:27.871070 2522602 start.go:125] createHost starting for "" (driver="docker")
	W1101 09:34:23.920150 2516487 pod_ready.go:104] pod "coredns-66bc5c9577-7hh2n" is not "Ready", error: <nil>
	W1101 09:34:26.415960 2516487 pod_ready.go:104] pod "coredns-66bc5c9577-7hh2n" is not "Ready", error: <nil>
	I1101 09:34:27.874451 2522602 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 09:34:27.874709 2522602 start.go:159] libmachine.API.Create for "auto-206273" (driver="docker")
	I1101 09:34:27.874748 2522602 client.go:173] LocalClient.Create starting
	I1101 09:34:27.874818 2522602 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem
	I1101 09:34:27.874855 2522602 main.go:143] libmachine: Decoding PEM data...
	I1101 09:34:27.874877 2522602 main.go:143] libmachine: Parsing certificate...
	I1101 09:34:27.874933 2522602 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem
	I1101 09:34:27.874956 2522602 main.go:143] libmachine: Decoding PEM data...
	I1101 09:34:27.874969 2522602 main.go:143] libmachine: Parsing certificate...
	I1101 09:34:27.875315 2522602 cli_runner.go:164] Run: docker network inspect auto-206273 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 09:34:27.890466 2522602 cli_runner.go:211] docker network inspect auto-206273 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 09:34:27.890541 2522602 network_create.go:284] running [docker network inspect auto-206273] to gather additional debugging logs...
	I1101 09:34:27.890558 2522602 cli_runner.go:164] Run: docker network inspect auto-206273
	W1101 09:34:27.906019 2522602 cli_runner.go:211] docker network inspect auto-206273 returned with exit code 1
	I1101 09:34:27.906057 2522602 network_create.go:287] error running [docker network inspect auto-206273]: docker network inspect auto-206273: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-206273 not found
	I1101 09:34:27.906074 2522602 network_create.go:289] output of [docker network inspect auto-206273]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-206273 not found
	
	** /stderr **
	I1101 09:34:27.906188 2522602 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:34:27.925152 2522602 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2d14cb2bf967 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:44:96:dd:d5:f7} reservation:<nil>}
	I1101 09:34:27.925497 2522602 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5e2113ca68f6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fa:43:2d:73:9d:6f} reservation:<nil>}
	I1101 09:34:27.925851 2522602 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-06825307e87a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:46:bb:6a:93:8e:bc} reservation:<nil>}
	I1101 09:34:27.926298 2522602 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001963b00}
	I1101 09:34:27.926319 2522602 network_create.go:124] attempt to create docker network auto-206273 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1101 09:34:27.926387 2522602 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-206273 auto-206273
	I1101 09:34:27.992936 2522602 network_create.go:108] docker network auto-206273 192.168.76.0/24 created
	I1101 09:34:27.992970 2522602 kic.go:121] calculated static IP "192.168.76.2" for the "auto-206273" container
	I1101 09:34:27.993061 2522602 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 09:34:28.013448 2522602 cli_runner.go:164] Run: docker volume create auto-206273 --label name.minikube.sigs.k8s.io=auto-206273 --label created_by.minikube.sigs.k8s.io=true
	I1101 09:34:28.035492 2522602 oci.go:103] Successfully created a docker volume auto-206273
	I1101 09:34:28.035604 2522602 cli_runner.go:164] Run: docker run --rm --name auto-206273-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-206273 --entrypoint /usr/bin/test -v auto-206273:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 09:34:28.536628 2522602 oci.go:107] Successfully prepared a docker volume auto-206273
	I1101 09:34:28.536674 2522602 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:34:28.536694 2522602 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 09:34:28.536769 2522602 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-206273:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1101 09:34:28.915468 2516487 pod_ready.go:104] pod "coredns-66bc5c9577-7hh2n" is not "Ready", error: <nil>
	W1101 09:34:31.414306 2516487 pod_ready.go:104] pod "coredns-66bc5c9577-7hh2n" is not "Ready", error: <nil>
	W1101 09:34:33.415911 2516487 pod_ready.go:104] pod "coredns-66bc5c9577-7hh2n" is not "Ready", error: <nil>
	I1101 09:34:32.821087 2522602 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-206273:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.284280898s)
	I1101 09:34:32.821117 2522602 kic.go:203] duration metric: took 4.284419815s to extract preloaded images to volume ...
	W1101 09:34:32.821255 2522602 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 09:34:32.821366 2522602 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 09:34:32.884333 2522602 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-206273 --name auto-206273 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-206273 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-206273 --network auto-206273 --ip 192.168.76.2 --volume auto-206273:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 09:34:33.181137 2522602 cli_runner.go:164] Run: docker container inspect auto-206273 --format={{.State.Running}}
	I1101 09:34:33.199786 2522602 cli_runner.go:164] Run: docker container inspect auto-206273 --format={{.State.Status}}
	I1101 09:34:33.225010 2522602 cli_runner.go:164] Run: docker exec auto-206273 stat /var/lib/dpkg/alternatives/iptables
	I1101 09:34:33.288783 2522602 oci.go:144] the created container "auto-206273" has a running status.
	I1101 09:34:33.288808 2522602 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/auto-206273/id_rsa...
	I1101 09:34:33.783443 2522602 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/auto-206273/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 09:34:33.802842 2522602 cli_runner.go:164] Run: docker container inspect auto-206273 --format={{.State.Status}}
	I1101 09:34:33.821129 2522602 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 09:34:33.821153 2522602 kic_runner.go:114] Args: [docker exec --privileged auto-206273 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 09:34:33.859419 2522602 cli_runner.go:164] Run: docker container inspect auto-206273 --format={{.State.Status}}
	I1101 09:34:33.876371 2522602 machine.go:94] provisionDockerMachine start ...
	I1101 09:34:33.876471 2522602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-206273
	I1101 09:34:33.892713 2522602 main.go:143] libmachine: Using SSH client type: native
	I1101 09:34:33.893056 2522602 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36385 <nil> <nil>}
	I1101 09:34:33.893071 2522602 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:34:33.893658 2522602 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52822->127.0.0.1:36385: read: connection reset by peer
	I1101 09:34:37.051350 2522602 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-206273
	
	I1101 09:34:37.051376 2522602 ubuntu.go:182] provisioning hostname "auto-206273"
	I1101 09:34:37.051459 2522602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-206273
	I1101 09:34:37.068963 2522602 main.go:143] libmachine: Using SSH client type: native
	I1101 09:34:37.069279 2522602 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36385 <nil> <nil>}
	I1101 09:34:37.069297 2522602 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-206273 && echo "auto-206273" | sudo tee /etc/hostname
	I1101 09:34:37.230843 2522602 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-206273
	
	I1101 09:34:37.230933 2522602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-206273
	I1101 09:34:37.249257 2522602 main.go:143] libmachine: Using SSH client type: native
	I1101 09:34:37.249573 2522602 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36385 <nil> <nil>}
	I1101 09:34:37.249595 2522602 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-206273' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-206273/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-206273' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:34:37.404028 2522602 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:34:37.404054 2522602 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-2314135/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-2314135/.minikube}
	I1101 09:34:37.404077 2522602 ubuntu.go:190] setting up certificates
	I1101 09:34:37.404086 2522602 provision.go:84] configureAuth start
	I1101 09:34:37.404147 2522602 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-206273
	I1101 09:34:37.422954 2522602 provision.go:143] copyHostCerts
	I1101 09:34:37.423013 2522602 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem, removing ...
	I1101 09:34:37.423021 2522602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem
	I1101 09:34:37.423101 2522602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/cert.pem (1123 bytes)
	I1101 09:34:37.423197 2522602 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem, removing ...
	I1101 09:34:37.423203 2522602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem
	I1101 09:34:37.423227 2522602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/key.pem (1675 bytes)
	I1101 09:34:37.423277 2522602 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem, removing ...
	I1101 09:34:37.423282 2522602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem
	I1101 09:34:37.423304 2522602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.pem (1082 bytes)
	I1101 09:34:37.423351 2522602 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem org=jenkins.auto-206273 san=[127.0.0.1 192.168.76.2 auto-206273 localhost minikube]
	I1101 09:34:37.593800 2522602 provision.go:177] copyRemoteCerts
	I1101 09:34:37.593871 2522602 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:34:37.593944 2522602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-206273
	I1101 09:34:37.613517 2522602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36385 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/auto-206273/id_rsa Username:docker}
	W1101 09:34:35.917808 2516487 pod_ready.go:104] pod "coredns-66bc5c9577-7hh2n" is not "Ready", error: <nil>
	W1101 09:34:38.415752 2516487 pod_ready.go:104] pod "coredns-66bc5c9577-7hh2n" is not "Ready", error: <nil>
	I1101 09:34:37.720849 2522602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:34:37.740531 2522602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1101 09:34:37.758769 2522602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:34:37.777250 2522602 provision.go:87] duration metric: took 373.141197ms to configureAuth
	I1101 09:34:37.777278 2522602 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:34:37.777461 2522602 config.go:182] Loaded profile config "auto-206273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:34:37.777587 2522602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-206273
	I1101 09:34:37.795174 2522602 main.go:143] libmachine: Using SSH client type: native
	I1101 09:34:37.795479 2522602 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 36385 <nil> <nil>}
	I1101 09:34:37.795499 2522602 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:34:38.188339 2522602 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:34:38.188361 2522602 machine.go:97] duration metric: took 4.311966989s to provisionDockerMachine
	I1101 09:34:38.188372 2522602 client.go:176] duration metric: took 10.313614048s to LocalClient.Create
	I1101 09:34:38.188396 2522602 start.go:167] duration metric: took 10.313680507s to libmachine.API.Create "auto-206273"
	I1101 09:34:38.188404 2522602 start.go:293] postStartSetup for "auto-206273" (driver="docker")
	I1101 09:34:38.188414 2522602 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:34:38.188488 2522602 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:34:38.188531 2522602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-206273
	I1101 09:34:38.207555 2522602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36385 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/auto-206273/id_rsa Username:docker}
	I1101 09:34:38.311919 2522602 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:34:38.315328 2522602 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:34:38.315358 2522602 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:34:38.315370 2522602 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/addons for local assets ...
	I1101 09:34:38.315424 2522602 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-2314135/.minikube/files for local assets ...
	I1101 09:34:38.315513 2522602 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem -> 23159822.pem in /etc/ssl/certs
	I1101 09:34:38.315621 2522602 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:34:38.323569 2522602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem --> /etc/ssl/certs/23159822.pem (1708 bytes)
	I1101 09:34:38.343007 2522602 start.go:296] duration metric: took 154.586481ms for postStartSetup
	I1101 09:34:38.343400 2522602 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-206273
	I1101 09:34:38.361431 2522602 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/config.json ...
	I1101 09:34:38.361723 2522602 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:34:38.361787 2522602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-206273
	I1101 09:34:38.381598 2522602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36385 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/auto-206273/id_rsa Username:docker}
	I1101 09:34:38.485367 2522602 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:34:38.490503 2522602 start.go:128] duration metric: took 10.619418512s to createHost
	I1101 09:34:38.490527 2522602 start.go:83] releasing machines lock for "auto-206273", held for 10.619560432s
	I1101 09:34:38.490614 2522602 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-206273
	I1101 09:34:38.508371 2522602 ssh_runner.go:195] Run: cat /version.json
	I1101 09:34:38.508435 2522602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-206273
	I1101 09:34:38.508486 2522602 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:34:38.508536 2522602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-206273
	I1101 09:34:38.527743 2522602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36385 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/auto-206273/id_rsa Username:docker}
	I1101 09:34:38.547993 2522602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36385 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/auto-206273/id_rsa Username:docker}
	I1101 09:34:38.745886 2522602 ssh_runner.go:195] Run: systemctl --version
	I1101 09:34:38.752284 2522602 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:34:38.792707 2522602 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:34:38.797060 2522602 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:34:38.797177 2522602 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:34:38.828098 2522602 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 09:34:38.828120 2522602 start.go:496] detecting cgroup driver to use...
	I1101 09:34:38.828152 2522602 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 09:34:38.828199 2522602 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:34:38.845716 2522602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:34:38.857978 2522602 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:34:38.858038 2522602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:34:38.877093 2522602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:34:38.895977 2522602 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:34:39.030710 2522602 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:34:39.148897 2522602 docker.go:234] disabling docker service ...
	I1101 09:34:39.149036 2522602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:34:39.168912 2522602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:34:39.188898 2522602 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:34:39.312013 2522602 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:34:39.425442 2522602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:34:39.438842 2522602 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:34:39.452639 2522602 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:34:39.452772 2522602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:34:39.462081 2522602 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:34:39.462187 2522602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:34:39.472038 2522602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:34:39.481682 2522602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:34:39.490477 2522602 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:34:39.498476 2522602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:34:39.507287 2522602 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:34:39.521868 2522602 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:34:39.530684 2522602 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:34:39.539125 2522602 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:34:39.546660 2522602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:34:39.671221 2522602 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:34:39.796253 2522602 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:34:39.796320 2522602 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:34:39.799925 2522602 start.go:564] Will wait 60s for crictl version
	I1101 09:34:39.800033 2522602 ssh_runner.go:195] Run: which crictl
	I1101 09:34:39.803602 2522602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:34:39.827707 2522602 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:34:39.827907 2522602 ssh_runner.go:195] Run: crio --version
	I1101 09:34:39.860895 2522602 ssh_runner.go:195] Run: crio --version
	I1101 09:34:39.897757 2522602 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:34:39.900561 2522602 cli_runner.go:164] Run: docker network inspect auto-206273 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:34:39.919507 2522602 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 09:34:39.923339 2522602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:34:39.932890 2522602 kubeadm.go:884] updating cluster {Name:auto-206273 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-206273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:34:39.933018 2522602 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:34:39.933090 2522602 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:34:39.976632 2522602 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:34:39.976656 2522602 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:34:39.976718 2522602 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:34:40.010480 2522602 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:34:40.010522 2522602 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:34:40.010531 2522602 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 09:34:40.010645 2522602 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-206273 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-206273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:34:40.010742 2522602 ssh_runner.go:195] Run: crio config
	I1101 09:34:40.088083 2522602 cni.go:84] Creating CNI manager for ""
	I1101 09:34:40.088177 2522602 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:34:40.088211 2522602 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:34:40.088288 2522602 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-206273 NodeName:auto-206273 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/
manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:34:40.088544 2522602 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-206273"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:34:40.088625 2522602 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:34:40.099069 2522602 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:34:40.099234 2522602 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:34:40.108514 2522602 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1101 09:34:40.123739 2522602 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:34:40.139820 2522602 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1101 09:34:40.155973 2522602 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:34:40.160760 2522602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:34:40.172474 2522602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:34:40.289907 2522602 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:34:40.306114 2522602 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273 for IP: 192.168.76.2
	I1101 09:34:40.306181 2522602 certs.go:195] generating shared ca certs ...
	I1101 09:34:40.306212 2522602 certs.go:227] acquiring lock for ca certs: {Name:mk24842b93d4e231663829c7c8677798ff77a3a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:34:40.306382 2522602 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key
	I1101 09:34:40.306481 2522602 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key
	I1101 09:34:40.306523 2522602 certs.go:257] generating profile certs ...
	I1101 09:34:40.306608 2522602 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/client.key
	I1101 09:34:40.306644 2522602 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/client.crt with IP's: []
	I1101 09:34:40.509440 2522602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/client.crt ...
	I1101 09:34:40.509476 2522602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/client.crt: {Name:mk0c96fc6b8c470a3ea45179be6f0a05103ee16a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:34:40.509676 2522602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/client.key ...
	I1101 09:34:40.509690 2522602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/client.key: {Name:mkd1baf0391d0be8a232910fe37fad1371a4d9fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:34:40.509785 2522602 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/apiserver.key.4e3cc1f3
	I1101 09:34:40.509800 2522602 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/apiserver.crt.4e3cc1f3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1101 09:34:41.184915 2522602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/apiserver.crt.4e3cc1f3 ...
	I1101 09:34:41.184991 2522602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/apiserver.crt.4e3cc1f3: {Name:mkee41d1cdc103c88e254aee3ec97a81010cd954 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:34:41.185243 2522602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/apiserver.key.4e3cc1f3 ...
	I1101 09:34:41.185281 2522602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/apiserver.key.4e3cc1f3: {Name:mkad47db866d7fecff27883ca9f83abbb90a10c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:34:41.185414 2522602 certs.go:382] copying /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/apiserver.crt.4e3cc1f3 -> /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/apiserver.crt
	I1101 09:34:41.185549 2522602 certs.go:386] copying /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/apiserver.key.4e3cc1f3 -> /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/apiserver.key
	I1101 09:34:41.185682 2522602 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/proxy-client.key
	I1101 09:34:41.185719 2522602 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/proxy-client.crt with IP's: []
	I1101 09:34:41.427229 2522602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/proxy-client.crt ...
	I1101 09:34:41.427307 2522602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/proxy-client.crt: {Name:mk62ef67249c7552cafb9e29e758c64ba1010fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:34:41.427549 2522602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/proxy-client.key ...
	I1101 09:34:41.427588 2522602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/proxy-client.key: {Name:mkb3112ca58d6d49a15bdde2b60e8ce44509c1a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:34:41.427836 2522602 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem (1338 bytes)
	W1101 09:34:41.427919 2522602 certs.go:480] ignoring /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982_empty.pem, impossibly tiny 0 bytes
	I1101 09:34:41.427946 2522602 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 09:34:41.428010 2522602 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:34:41.428066 2522602 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:34:41.428115 2522602 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/key.pem (1675 bytes)
	I1101 09:34:41.428197 2522602 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem (1708 bytes)
	I1101 09:34:41.428839 2522602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:34:41.450362 2522602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 09:34:41.472934 2522602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:34:41.492430 2522602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:34:41.512146 2522602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1101 09:34:41.534638 2522602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 09:34:41.557076 2522602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:34:41.576866 2522602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 09:34:41.595888 2522602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/ssl/certs/23159822.pem --> /usr/share/ca-certificates/23159822.pem (1708 bytes)
	I1101 09:34:41.615047 2522602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:34:41.635170 2522602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-2314135/.minikube/certs/2315982.pem --> /usr/share/ca-certificates/2315982.pem (1338 bytes)
	I1101 09:34:41.653097 2522602 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:34:41.667215 2522602 ssh_runner.go:195] Run: openssl version
	I1101 09:34:41.677814 2522602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23159822.pem && ln -fs /usr/share/ca-certificates/23159822.pem /etc/ssl/certs/23159822.pem"
	I1101 09:34:41.687948 2522602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23159822.pem
	I1101 09:34:41.691478 2522602 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 08:36 /usr/share/ca-certificates/23159822.pem
	I1101 09:34:41.691575 2522602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23159822.pem
	I1101 09:34:41.737594 2522602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23159822.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:34:41.747793 2522602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:34:41.757872 2522602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:34:41.770149 2522602 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:34:41.770216 2522602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:34:41.816149 2522602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:34:41.825784 2522602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2315982.pem && ln -fs /usr/share/ca-certificates/2315982.pem /etc/ssl/certs/2315982.pem"
	I1101 09:34:41.833942 2522602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2315982.pem
	I1101 09:34:41.837655 2522602 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 08:36 /usr/share/ca-certificates/2315982.pem
	I1101 09:34:41.837720 2522602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2315982.pem
	I1101 09:34:41.878993 2522602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2315982.pem /etc/ssl/certs/51391683.0"
	I1101 09:34:41.887403 2522602 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:34:41.890926 2522602 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 09:34:41.891022 2522602 kubeadm.go:401] StartCluster: {Name:auto-206273 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-206273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetCl
ientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:34:41.891106 2522602 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:34:41.891166 2522602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:34:41.921874 2522602 cri.go:89] found id: ""
	I1101 09:34:41.921961 2522602 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:34:41.929780 2522602 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:34:41.937668 2522602 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 09:34:41.937768 2522602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:34:41.946776 2522602 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 09:34:41.946842 2522602 kubeadm.go:158] found existing configuration files:
	
	I1101 09:34:41.946924 2522602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 09:34:41.954889 2522602 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 09:34:41.954978 2522602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 09:34:41.962394 2522602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 09:34:41.970835 2522602 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 09:34:41.970951 2522602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:34:41.978558 2522602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 09:34:41.986452 2522602 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 09:34:41.986522 2522602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:34:41.994738 2522602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 09:34:42.002760 2522602 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 09:34:42.002850 2522602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 09:34:42.019089 2522602 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 09:34:42.068304 2522602 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 09:34:42.068449 2522602 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 09:34:42.101224 2522602 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 09:34:42.101362 2522602 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 09:34:42.101409 2522602 kubeadm.go:319] OS: Linux
	I1101 09:34:42.101476 2522602 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 09:34:42.101549 2522602 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 09:34:42.101624 2522602 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 09:34:42.101690 2522602 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 09:34:42.101757 2522602 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 09:34:42.101832 2522602 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 09:34:42.101914 2522602 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 09:34:42.101981 2522602 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 09:34:42.102045 2522602 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 09:34:42.209927 2522602 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 09:34:42.210057 2522602 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 09:34:42.210185 2522602 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 09:34:42.232195 2522602 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 09:34:42.237983 2522602 out.go:252]   - Generating certificates and keys ...
	I1101 09:34:42.238453 2522602 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 09:34:42.240085 2522602 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	W1101 09:34:40.916427 2516487 pod_ready.go:104] pod "coredns-66bc5c9577-7hh2n" is not "Ready", error: <nil>
	W1101 09:34:43.420281 2516487 pod_ready.go:104] pod "coredns-66bc5c9577-7hh2n" is not "Ready", error: <nil>
	I1101 09:34:44.414354 2516487 pod_ready.go:94] pod "coredns-66bc5c9577-7hh2n" is "Ready"
	I1101 09:34:44.414436 2516487 pod_ready.go:86] duration metric: took 36.505246555s for pod "coredns-66bc5c9577-7hh2n" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:34:44.414463 2516487 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mbmf5" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:34:44.418819 2516487 pod_ready.go:94] pod "coredns-66bc5c9577-mbmf5" is "Ready"
	I1101 09:34:44.418843 2516487 pod_ready.go:86] duration metric: took 4.367009ms for pod "coredns-66bc5c9577-mbmf5" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:34:44.421386 2516487 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-703627" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:34:44.447348 2516487 pod_ready.go:94] pod "etcd-default-k8s-diff-port-703627" is "Ready"
	I1101 09:34:44.447373 2516487 pod_ready.go:86] duration metric: took 25.966172ms for pod "etcd-default-k8s-diff-port-703627" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:34:44.469041 2516487 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-703627" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:34:44.485793 2516487 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-703627" is "Ready"
	I1101 09:34:44.485860 2516487 pod_ready.go:86] duration metric: took 16.795111ms for pod "kube-apiserver-default-k8s-diff-port-703627" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:34:44.613245 2516487 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-703627" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:34:45.014461 2516487 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-703627" is "Ready"
	I1101 09:34:45.014545 2516487 pod_ready.go:86] duration metric: took 401.227475ms for pod "kube-controller-manager-default-k8s-diff-port-703627" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:34:45.213946 2516487 pod_ready.go:83] waiting for pod "kube-proxy-6lwj9" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:34:45.612870 2516487 pod_ready.go:94] pod "kube-proxy-6lwj9" is "Ready"
	I1101 09:34:45.612897 2516487 pod_ready.go:86] duration metric: took 398.861821ms for pod "kube-proxy-6lwj9" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:34:45.813593 2516487 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-703627" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:34:46.212610 2516487 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-703627" is "Ready"
	I1101 09:34:46.212640 2516487 pod_ready.go:86] duration metric: took 398.97774ms for pod "kube-scheduler-default-k8s-diff-port-703627" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:34:46.212656 2516487 pod_ready.go:40] duration metric: took 38.325218844s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:34:46.292672 2516487 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 09:34:46.296063 2516487 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-703627" cluster and "default" namespace by default
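The pod_ready.go lines from process 2516487 above are the readiness poll that eventually reports each kube-system pod as "Ready". A minimal client-go sketch of that kind of wait is shown below; the kubeconfig path, label selector (k8s-app=kube-dns) and 4-minute deadline are illustrative assumptions, not minikube's actual values.

    package main

    import (
            "context"
            "fmt"
            "time"

            corev1 "k8s.io/api/core/v1"
            metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
            "k8s.io/client-go/kubernetes"
            "k8s.io/client-go/tools/clientcmd"
    )

    // isReady reports whether the pod's Ready condition is True.
    func isReady(pod corev1.Pod) bool {
            for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                            return c.Status == corev1.ConditionTrue
                    }
            }
            return false
    }

    func main() {
            cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
            if err != nil {
                    panic(err)
            }
            client := kubernetes.NewForConfigOrDie(cfg)

            deadline := time.Now().Add(4 * time.Minute) // assumed overall timeout
            for time.Now().Before(deadline) {
                    pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
                            metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
                    if err == nil && len(pods.Items) > 0 {
                            ready := true
                            for _, p := range pods.Items {
                                    if !isReady(p) {
                                            ready = false
                                    }
                            }
                            if ready {
                                    fmt.Println("all matching pods are Ready")
                                    return
                            }
                    }
                    time.Sleep(2 * time.Second)
            }
            fmt.Println("timed out waiting for pods to become Ready")
    }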
	I1101 09:34:42.779273 2522602 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 09:34:43.104822 2522602 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 09:34:43.729182 2522602 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 09:34:43.970315 2522602 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 09:34:45.326546 2522602 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 09:34:45.326936 2522602 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-206273 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 09:34:45.462228 2522602 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 09:34:45.462640 2522602 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-206273 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 09:34:46.066217 2522602 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 09:34:47.101405 2522602 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 09:34:47.253475 2522602 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 09:34:47.253774 2522602 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 09:34:47.579799 2522602 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 09:34:49.184211 2522602 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 09:34:50.469353 2522602 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 09:34:51.478449 2522602 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 09:34:51.590708 2522602 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 09:34:51.591314 2522602 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 09:34:51.593837 2522602 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 09:34:51.597479 2522602 out.go:252]   - Booting up control plane ...
	I1101 09:34:51.597575 2522602 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:34:51.597656 2522602 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:34:51.597725 2522602 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 09:34:51.614949 2522602 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:34:51.615057 2522602 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 09:34:51.622885 2522602 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 09:34:51.623216 2522602 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:34:51.623262 2522602 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:34:51.754176 2522602 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 09:34:51.754319 2522602 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 09:34:53.263134 2522602 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.509042493s
	I1101 09:34:53.267732 2522602 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 09:34:53.268199 2522602 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1101 09:34:53.272210 2522602 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 09:34:53.272762 2522602 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 09:34:58.384520 2522602 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.111676564s
	I1101 09:35:01.035644 2522602 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 7.762388824s
	I1101 09:35:01.776711 2522602 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.502336567s
	I1101 09:35:01.818642 2522602 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 09:35:01.848332 2522602 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 09:35:01.879065 2522602 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 09:35:01.879268 2522602 kubeadm.go:319] [mark-control-plane] Marking the node auto-206273 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 09:35:01.908712 2522602 kubeadm.go:319] [bootstrap-token] Using token: wv5weu.3rzz14fjiv1o5v51
	I1101 09:35:01.911702 2522602 out.go:252]   - Configuring RBAC rules ...
	I1101 09:35:01.911838 2522602 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 09:35:01.918013 2522602 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 09:35:01.935276 2522602 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 09:35:01.942916 2522602 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 09:35:01.950476 2522602 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 09:35:01.956054 2522602 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 09:35:02.184192 2522602 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 09:35:02.624190 2522602 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 09:35:03.185711 2522602 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 09:35:03.187238 2522602 kubeadm.go:319] 
	I1101 09:35:03.187328 2522602 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 09:35:03.187339 2522602 kubeadm.go:319] 
	I1101 09:35:03.187417 2522602 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 09:35:03.187428 2522602 kubeadm.go:319] 
	I1101 09:35:03.187453 2522602 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 09:35:03.187904 2522602 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 09:35:03.187963 2522602 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 09:35:03.187970 2522602 kubeadm.go:319] 
	I1101 09:35:03.188025 2522602 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 09:35:03.188029 2522602 kubeadm.go:319] 
	I1101 09:35:03.188077 2522602 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 09:35:03.188086 2522602 kubeadm.go:319] 
	I1101 09:35:03.188145 2522602 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 09:35:03.188226 2522602 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 09:35:03.188298 2522602 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 09:35:03.188306 2522602 kubeadm.go:319] 
	I1101 09:35:03.188581 2522602 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 09:35:03.188673 2522602 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 09:35:03.188684 2522602 kubeadm.go:319] 
	I1101 09:35:03.188967 2522602 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token wv5weu.3rzz14fjiv1o5v51 \
	I1101 09:35:03.189076 2522602 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4543f3590cccb8495171c728a2631a18a238961aafa5b09f43cdaf25ae01fa5d \
	I1101 09:35:03.189258 2522602 kubeadm.go:319] 	--control-plane 
	I1101 09:35:03.189274 2522602 kubeadm.go:319] 
	I1101 09:35:03.189520 2522602 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 09:35:03.189531 2522602 kubeadm.go:319] 
	I1101 09:35:03.189776 2522602 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token wv5weu.3rzz14fjiv1o5v51 \
	I1101 09:35:03.190036 2522602 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4543f3590cccb8495171c728a2631a18a238961aafa5b09f43cdaf25ae01fa5d 
	I1101 09:35:03.195724 2522602 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 09:35:03.196008 2522602 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 09:35:03.196122 2522602 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 09:35:03.196143 2522602 cni.go:84] Creating CNI manager for ""
	I1101 09:35:03.196150 2522602 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:35:03.199094 2522602 out.go:179] * Configuring CNI (Container Networking Interface) ...
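Note: the kubeadm output above ends with ready-to-use join commands for this cluster (auto-206273). The bootstrap token it embeds (wv5weu.*) is short-lived (kubeadm's default token TTL is 24h), so joining a node later would normally start by listing or re-minting the token on the control-plane node. A minimal sketch, assuming kubeadm is on the PATH and /etc/kubernetes/admin.conf is readable as in the init log above:

  kubeadm token list                          # shows existing bootstrap tokens and their TTLs
  kubeadm token create --print-join-command   # mints a fresh token and prints a complete join command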
	
	
	==> CRI-O <==
	Nov 01 09:34:36 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:36.370481759Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=60a11dcb-ec4e-45a3-ba52-6419b38452a5 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:34:36 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:36.371708703Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=2eea7837-83f5-4418-bef1-e0ffb5e2c96d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:34:36 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:36.372005737Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:34:36 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:36.377886304Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:34:36 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:36.378263533Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/4f925dfdba826b95e988bb09d59efcc9e00fe2f5313bfc9251a16e341791e1ac/merged/etc/passwd: no such file or directory"
	Nov 01 09:34:36 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:36.378410401Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/4f925dfdba826b95e988bb09d59efcc9e00fe2f5313bfc9251a16e341791e1ac/merged/etc/group: no such file or directory"
	Nov 01 09:34:36 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:36.378792692Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:34:36 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:36.394156516Z" level=info msg="Created container 785bd2a3eea280c35729177748a00a80a454d2d6597849d0896d10a19b7e2833: kube-system/storage-provisioner/storage-provisioner" id=2eea7837-83f5-4418-bef1-e0ffb5e2c96d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:34:36 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:36.395111509Z" level=info msg="Starting container: 785bd2a3eea280c35729177748a00a80a454d2d6597849d0896d10a19b7e2833" id=7a9812c1-3204-4f75-b98b-b81273ddd024 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:34:36 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:36.398421128Z" level=info msg="Started container" PID=1697 containerID=785bd2a3eea280c35729177748a00a80a454d2d6597849d0896d10a19b7e2833 description=kube-system/storage-provisioner/storage-provisioner id=7a9812c1-3204-4f75-b98b-b81273ddd024 name=/runtime.v1.RuntimeService/StartContainer sandboxID=71959e674b8ce9d3865629406ee5b011f0d306030dbd51998dfce9690b7131db
	Nov 01 09:34:46 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:46.408446841Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:34:46 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:46.420420244Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:34:46 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:46.420595304Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:34:46 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:46.420667047Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:34:46 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:46.432086636Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:34:46 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:46.432251226Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:34:46 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:46.432323758Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:34:46 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:46.438663906Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:34:46 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:46.43881911Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:34:46 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:46.438890214Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:34:46 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:46.456140082Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:34:46 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:46.456183379Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:34:46 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:46.456203104Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:34:46 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:46.471671975Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:34:46 default-k8s-diff-port-703627 crio[670]: time="2025-11-01T09:34:46.471838214Z" level=info msg="Updated default CNI network name to kindnet"
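Note: the CREATE/WRITE/RENAME events above are kindnet writing /etc/cni/net.d/10-kindnet.conflist.temp and renaming it into place; CRI-O's CNI watcher reloads the directory on each event and switches its default network to kindnet, which is what unblocks pod sandbox creation after the restart. To see the config CRI-O ended up with, one could inspect the node directly (a sketch using this run's profile name):

  out/minikube-linux-arm64 -p default-k8s-diff-port-703627 ssh -- sudo ls /etc/cni/net.d/
  out/minikube-linux-arm64 -p default-k8s-diff-port-703627 ssh -- sudo cat /etc/cni/net.d/10-kindnet.conflist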
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	785bd2a3eea28       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           29 seconds ago       Running             storage-provisioner         2                   71959e674b8ce       storage-provisioner                                    kube-system
	eaf8d298a127c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           33 seconds ago       Exited              dashboard-metrics-scraper   2                   42840e9bae406       dashboard-metrics-scraper-6ffb444bf9-kqqm9             kubernetes-dashboard
	cbda83eea242f       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   42 seconds ago       Running             kubernetes-dashboard        0                   4502c65a4ca63       kubernetes-dashboard-855c9754f9-l6cs4                  kubernetes-dashboard
	2c75f0a8e4317       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           59 seconds ago       Running             coredns                     1                   9e5e08959386f       coredns-66bc5c9577-mbmf5                               kube-system
	890a087cbbfb6       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           59 seconds ago       Running             busybox                     1                   121038856e59c       busybox                                                default
	e3203b28a815b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           59 seconds ago       Exited              storage-provisioner         1                   71959e674b8ce       storage-provisioner                                    kube-system
	906fe23ff42d4       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           59 seconds ago       Running             kube-proxy                  1                   076292d587710       kube-proxy-6lwj9                                       kube-system
	eb67b5d7cf844       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   1d540eb1d4ecf       kindnet-td2vz                                          kube-system
	988bd3df89407       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   e812339ce782a       coredns-66bc5c9577-7hh2n                               kube-system
	ee79a7fc9cfee       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   271ad71b8345f       kube-apiserver-default-k8s-diff-port-703627            kube-system
	da7e2f29a7555       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   1d9f882d85f59       kube-scheduler-default-k8s-diff-port-703627            kube-system
	ae10c649f560f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   6ea9ad3e885bc       kube-controller-manager-default-k8s-diff-port-703627   kube-system
	c7d1cc29b1ea5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   d8af357421432       etcd-default-k8s-diff-port-703627                      kube-system
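Note: this table is the CRI view of the node after the restart: every control-plane and addon container is on attempt 1 or higher, the old storage-provisioner (e3203b28a815b) has Exited, and its replacement (785bd2a3eea28) is Running. The same view can be reproduced interactively on the node (illustrative; crictl usually accepts a unique ID prefix, otherwise use the full ID from the CRI-O log above):

  out/minikube-linux-arm64 -p default-k8s-diff-port-703627 ssh -- sudo crictl ps -a
  out/minikube-linux-arm64 -p default-k8s-diff-port-703627 ssh -- sudo crictl logs 785bd2a3eea28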
	
	
	==> coredns [2c75f0a8e43174ffcb23721d35794e30d0c951d79bbefa0776e5d7225c6a6443] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53323 - 45120 "HINFO IN 3236742124570476460.4777910283780346032. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.032977579s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [988bd3df894076818e904c7d20f94d20da1787b44cb9aa57fbf416feb32b2c15] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56599 - 54997 "HINFO IN 4034213257305694922.8264765600997235812. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013129178s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
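Note: both CoreDNS replicas log the same pattern after the restart: list calls to the in-cluster API VIP (https://10.96.0.1:443) that were started as the pods came up time out 30 seconds later at 09:34:36, and nothing fails afterwards (the kindnet log below shows identical timeouts clearing at the same moment). This looks like the brief window before Service routing to the apiserver was re-programmed on the restarted node, not a persistent fault. If the errors kept repeating, the usual checks would be (illustrative):

  kubectl get svc kubernetes -n default                         # the 10.96.0.1 ClusterIP the errors point at
  kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
  kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20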
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-703627
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-703627
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=default-k8s-diff-port-703627
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_32_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:32:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-703627
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:34:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:34:36 +0000   Sat, 01 Nov 2025 09:32:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:34:36 +0000   Sat, 01 Nov 2025 09:32:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:34:36 +0000   Sat, 01 Nov 2025 09:32:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:34:36 +0000   Sat, 01 Nov 2025 09:33:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-703627
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                715daf08-52c6-47e9-9d22-22f4a756b35f
	  Boot ID:                    eebecd53-57fd-46e5-aa39-103fca906436
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 coredns-66bc5c9577-7hh2n                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m26s
	  kube-system                 coredns-66bc5c9577-mbmf5                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m26s
	  kube-system                 etcd-default-k8s-diff-port-703627                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m31s
	  kube-system                 kindnet-td2vz                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m26s
	  kube-system                 kube-apiserver-default-k8s-diff-port-703627             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-703627    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-proxy-6lwj9                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-scheduler-default-k8s-diff-port-703627             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-kqqm9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-l6cs4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m24s                  kube-proxy       
	  Normal   Starting                 58s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m40s (x8 over 2m40s)  kubelet          Node default-k8s-diff-port-703627 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m40s (x8 over 2m40s)  kubelet          Node default-k8s-diff-port-703627 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m40s (x8 over 2m40s)  kubelet          Node default-k8s-diff-port-703627 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m31s                  kubelet          Node default-k8s-diff-port-703627 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m31s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m31s                  kubelet          Node default-k8s-diff-port-703627 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m31s                  kubelet          Node default-k8s-diff-port-703627 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m31s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m27s                  node-controller  Node default-k8s-diff-port-703627 event: Registered Node default-k8s-diff-port-703627 in Controller
	  Normal   NodeReady                105s                   kubelet          Node default-k8s-diff-port-703627 status is now: NodeReady
	  Normal   Starting                 70s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 70s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  70s (x8 over 70s)      kubelet          Node default-k8s-diff-port-703627 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    70s (x8 over 70s)      kubelet          Node default-k8s-diff-port-703627 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     70s (x8 over 70s)      kubelet          Node default-k8s-diff-port-703627 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s                    node-controller  Node default-k8s-diff-port-703627 event: Registered Node default-k8s-diff-port-703627 in Controller
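Note: the "Allocated resources" block above is just the column sums of the pod table: CPU requests are 2x100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 950m, i.e. 950m of the node's 2000m, reported as 47%; the 100m CPU limit is kindnet's, and the 390Mi memory limit is kindnet's 50Mi plus 2x170Mi from coredns. Memory requests are 2x70Mi + 100Mi + 50Mi = 290Mi. To reproduce the summary for this node (illustrative):

  kubectl describe node default-k8s-diff-port-703627 | grep -A 12 'Allocated resources'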
	
	
	==> dmesg <==
	[Nov 1 09:15] overlayfs: idmapped layers are currently not supported
	[ +24.457663] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:16] overlayfs: idmapped layers are currently not supported
	[ +26.408819] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:18] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:19] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:20] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:22] overlayfs: idmapped layers are currently not supported
	[ +31.970573] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:24] overlayfs: idmapped layers are currently not supported
	[ +34.721891] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:25] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:26] overlayfs: idmapped layers are currently not supported
	[  +0.217637] overlayfs: idmapped layers are currently not supported
	[ +42.063471] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:29] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:30] overlayfs: idmapped layers are currently not supported
	[ +22.794250] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:31] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:33] overlayfs: idmapped layers are currently not supported
	[ +18.806441] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:34] overlayfs: idmapped layers are currently not supported
	[ +47.017810] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c7d1cc29b1ea5c8867b99a096fc1bb9f05c294172a955361ff24adccbc307e8b] <==
	{"level":"warn","ts":"2025-11-01T09:34:01.557668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:01.620022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:01.664494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:01.708153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:01.790441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:01.802018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:01.867988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:01.904974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:01.929108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:01.976016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:02.031631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:02.102544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:02.138594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:02.209931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:02.259056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:02.366234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:02.409539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:02.487948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:02.507940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:02.577942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:02.665281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:02.695718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:02.748387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:02.784648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:34:03.008604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56652","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:35:06 up 18:17,  0 user,  load average: 5.93, 4.42, 3.43
	Linux default-k8s-diff-port-703627 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [eb67b5d7cf8442d6e208955bcc3c7672c8626771d4a76dbef50244c7fd76ddb5] <==
	I1101 09:34:06.189847       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:34:06.190163       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 09:34:06.190429       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:34:06.190442       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:34:06.190452       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:34:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:34:06.407654       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:34:06.407671       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:34:06.407681       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:34:06.408363       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 09:34:36.408021       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 09:34:36.408329       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 09:34:36.408403       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 09:34:36.408438       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1101 09:34:38.008169       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:34:38.008321       1 metrics.go:72] Registering metrics
	I1101 09:34:38.008427       1 controller.go:711] "Syncing nftables rules"
	I1101 09:34:46.408096       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 09:34:46.408156       1 main.go:301] handling current node
	I1101 09:34:56.415897       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 09:34:56.416015       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ee79a7fc9cfee9bef0f776db44e3429ff28411131f6bdc1c4562483440dc3f4c] <==
	I1101 09:34:04.774282       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:34:04.774356       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 09:34:04.774363       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 09:34:04.775439       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 09:34:04.775643       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 09:34:04.779172       1 aggregator.go:171] initial CRD sync complete...
	I1101 09:34:04.779187       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 09:34:04.779194       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:34:04.779201       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:34:04.788545       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:34:04.789100       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 09:34:04.789211       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 09:34:04.806665       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1101 09:34:04.920243       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 09:34:04.949341       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:34:05.107538       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:34:06.887254       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:34:06.964306       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:34:07.060256       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:34:07.103113       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:34:07.537157       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.27.62"}
	I1101 09:34:07.618903       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.174.231"}
	I1101 09:34:08.947344       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:34:09.111594       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:34:09.337881       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [ae10c649f560f9607936e15ba64a4779c42997b6bfc46ec03edd143e585f8bb2] <==
	I1101 09:34:08.876153       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 09:34:08.876179       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 09:34:08.874561       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 09:34:08.880172       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 09:34:08.880395       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 09:34:08.888141       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:34:08.889261       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:34:08.890567       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 09:34:08.903330       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 09:34:08.903627       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 09:34:08.903696       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 09:34:08.904133       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:34:08.903713       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 09:34:08.907044       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 09:34:08.907126       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 09:34:08.907138       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 09:34:08.907149       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 09:34:08.915917       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 09:34:08.916156       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 09:34:08.919954       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:34:08.932066       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:34:08.932749       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:34:08.932805       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:34:09.384750       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	I1101 09:34:09.385478       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [906fe23ff42d46025170a70959cf630e42fc9c5c8900d890108c863e5308c3a1] <==
	I1101 09:34:06.901203       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:34:07.130449       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:34:07.314905       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:34:07.314938       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 09:34:07.315002       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:34:07.575119       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:34:07.575175       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:34:07.643743       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:34:07.645390       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:34:07.645411       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:34:07.664099       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:34:07.664122       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:34:07.664457       1 config.go:200] "Starting service config controller"
	I1101 09:34:07.664464       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:34:07.664779       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:34:07.664786       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:34:07.665236       1 config.go:309] "Starting node config controller"
	I1101 09:34:07.665244       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:34:07.665249       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:34:07.765362       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:34:07.765600       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:34:07.765630       1 shared_informer.go:356] "Caches are synced" controller="service config"
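Note: the only non-info line in this kube-proxy log is the startup warning that nodePortAddresses is unset, meaning NodePort services are accepted on every local IP; the message itself suggests `--nodeport-addresses primary`. In a kubeadm-managed cluster like this one the setting lives in the kube-proxy ConfigMap (KubeProxyConfiguration) rather than on the command line; a sketch of where to look, treating the exact field value as something to verify against this Kubernetes version:

  kubectl -n kube-system get configmap kube-proxy -o yaml | grep -n -A 2 nodePortAddresses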
	
	
	==> kube-scheduler [da7e2f29a75554b0877ff12539ff3a7b3a2f4e382fdeae7e7c099e23f545bfe9] <==
	I1101 09:34:02.615168       1 serving.go:386] Generated self-signed cert in-memory
	W1101 09:34:04.368811       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 09:34:04.368843       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 09:34:04.368852       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 09:34:04.368860       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 09:34:04.684889       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:34:04.684924       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:34:04.697087       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:34:04.697198       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:34:04.697216       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:34:04.697233       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:34:04.813992       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:34:09 default-k8s-diff-port-703627 kubelet[794]: W1101 09:34:09.706401     794 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a747d7437780c8943ddef42d5ec2400858d0693e94483b75825664710eb98d9e/crio-42840e9bae40628f1e6ca37bbb169079a7ac4bc2240f53257272e05b219a15e7 WatchSource:0}: Error finding container 42840e9bae40628f1e6ca37bbb169079a7ac4bc2240f53257272e05b219a15e7: Status 404 returned error can't find the container with id 42840e9bae40628f1e6ca37bbb169079a7ac4bc2240f53257272e05b219a15e7
	Nov 01 09:34:09 default-k8s-diff-port-703627 kubelet[794]: W1101 09:34:09.744930     794 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a747d7437780c8943ddef42d5ec2400858d0693e94483b75825664710eb98d9e/crio-4502c65a4ca63d388f4dd7a97feab43a536d8fc64323f062cdf7c1805da0d60f WatchSource:0}: Error finding container 4502c65a4ca63d388f4dd7a97feab43a536d8fc64323f062cdf7c1805da0d60f: Status 404 returned error can't find the container with id 4502c65a4ca63d388f4dd7a97feab43a536d8fc64323f062cdf7c1805da0d60f
	Nov 01 09:34:11 default-k8s-diff-port-703627 kubelet[794]: I1101 09:34:11.695822     794 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 09:34:13 default-k8s-diff-port-703627 kubelet[794]: I1101 09:34:13.933380     794 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 09:34:16 default-k8s-diff-port-703627 kubelet[794]: I1101 09:34:16.280890     794 scope.go:117] "RemoveContainer" containerID="0af332a14f15b900cd18824407fd56e417a1a172e067d26d44e0487129243413"
	Nov 01 09:34:17 default-k8s-diff-port-703627 kubelet[794]: I1101 09:34:17.301804     794 scope.go:117] "RemoveContainer" containerID="0af332a14f15b900cd18824407fd56e417a1a172e067d26d44e0487129243413"
	Nov 01 09:34:17 default-k8s-diff-port-703627 kubelet[794]: I1101 09:34:17.302184     794 scope.go:117] "RemoveContainer" containerID="3de08bab4f757a283d2d7aa45c1faf339b62a48c97805a01a6b241c8d7a3d5ba"
	Nov 01 09:34:17 default-k8s-diff-port-703627 kubelet[794]: E1101 09:34:17.302357     794 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kqqm9_kubernetes-dashboard(9ec842f8-251c-4115-a4c6-2716850c17dd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kqqm9" podUID="9ec842f8-251c-4115-a4c6-2716850c17dd"
	Nov 01 09:34:18 default-k8s-diff-port-703627 kubelet[794]: I1101 09:34:18.308282     794 scope.go:117] "RemoveContainer" containerID="3de08bab4f757a283d2d7aa45c1faf339b62a48c97805a01a6b241c8d7a3d5ba"
	Nov 01 09:34:18 default-k8s-diff-port-703627 kubelet[794]: E1101 09:34:18.308431     794 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kqqm9_kubernetes-dashboard(9ec842f8-251c-4115-a4c6-2716850c17dd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kqqm9" podUID="9ec842f8-251c-4115-a4c6-2716850c17dd"
	Nov 01 09:34:19 default-k8s-diff-port-703627 kubelet[794]: I1101 09:34:19.659062     794 scope.go:117] "RemoveContainer" containerID="3de08bab4f757a283d2d7aa45c1faf339b62a48c97805a01a6b241c8d7a3d5ba"
	Nov 01 09:34:19 default-k8s-diff-port-703627 kubelet[794]: E1101 09:34:19.659245     794 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kqqm9_kubernetes-dashboard(9ec842f8-251c-4115-a4c6-2716850c17dd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kqqm9" podUID="9ec842f8-251c-4115-a4c6-2716850c17dd"
	Nov 01 09:34:31 default-k8s-diff-port-703627 kubelet[794]: I1101 09:34:31.883744     794 scope.go:117] "RemoveContainer" containerID="3de08bab4f757a283d2d7aa45c1faf339b62a48c97805a01a6b241c8d7a3d5ba"
	Nov 01 09:34:32 default-k8s-diff-port-703627 kubelet[794]: I1101 09:34:32.355411     794 scope.go:117] "RemoveContainer" containerID="3de08bab4f757a283d2d7aa45c1faf339b62a48c97805a01a6b241c8d7a3d5ba"
	Nov 01 09:34:32 default-k8s-diff-port-703627 kubelet[794]: I1101 09:34:32.355696     794 scope.go:117] "RemoveContainer" containerID="eaf8d298a127caa808c3e83b43303a6d0f654deca7780b6baed673bc56707d82"
	Nov 01 09:34:32 default-k8s-diff-port-703627 kubelet[794]: E1101 09:34:32.355877     794 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kqqm9_kubernetes-dashboard(9ec842f8-251c-4115-a4c6-2716850c17dd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kqqm9" podUID="9ec842f8-251c-4115-a4c6-2716850c17dd"
	Nov 01 09:34:32 default-k8s-diff-port-703627 kubelet[794]: I1101 09:34:32.430088     794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-l6cs4" podStartSLOduration=9.673156036 podStartE2EDuration="23.430068804s" podCreationTimestamp="2025-11-01 09:34:09 +0000 UTC" firstStartedPulling="2025-11-01 09:34:09.783685387 +0000 UTC m=+13.241797512" lastFinishedPulling="2025-11-01 09:34:23.540598155 +0000 UTC m=+26.998710280" observedRunningTime="2025-11-01 09:34:24.349663761 +0000 UTC m=+27.807775902" watchObservedRunningTime="2025-11-01 09:34:32.430068804 +0000 UTC m=+35.888180928"
	Nov 01 09:34:36 default-k8s-diff-port-703627 kubelet[794]: I1101 09:34:36.368684     794 scope.go:117] "RemoveContainer" containerID="e3203b28a815bbf14e3e0b281844d7e2c9449efdae4d2b238d97510ac329b0a5"
	Nov 01 09:34:39 default-k8s-diff-port-703627 kubelet[794]: I1101 09:34:39.659415     794 scope.go:117] "RemoveContainer" containerID="eaf8d298a127caa808c3e83b43303a6d0f654deca7780b6baed673bc56707d82"
	Nov 01 09:34:39 default-k8s-diff-port-703627 kubelet[794]: E1101 09:34:39.659600     794 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kqqm9_kubernetes-dashboard(9ec842f8-251c-4115-a4c6-2716850c17dd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kqqm9" podUID="9ec842f8-251c-4115-a4c6-2716850c17dd"
	Nov 01 09:34:50 default-k8s-diff-port-703627 kubelet[794]: I1101 09:34:50.888132     794 scope.go:117] "RemoveContainer" containerID="eaf8d298a127caa808c3e83b43303a6d0f654deca7780b6baed673bc56707d82"
	Nov 01 09:34:50 default-k8s-diff-port-703627 kubelet[794]: E1101 09:34:50.888305     794 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kqqm9_kubernetes-dashboard(9ec842f8-251c-4115-a4c6-2716850c17dd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kqqm9" podUID="9ec842f8-251c-4115-a4c6-2716850c17dd"
	Nov 01 09:34:59 default-k8s-diff-port-703627 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 09:34:59 default-k8s-diff-port-703627 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 09:34:59 default-k8s-diff-port-703627 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [cbda83eea242fce4b409534daa04c22b9a0d561f0566989379c73d1d837b7244] <==
	2025/11/01 09:34:23 Using namespace: kubernetes-dashboard
	2025/11/01 09:34:23 Using in-cluster config to connect to apiserver
	2025/11/01 09:34:23 Using secret token for csrf signing
	2025/11/01 09:34:23 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 09:34:23 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 09:34:23 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 09:34:23 Generating JWE encryption key
	2025/11/01 09:34:23 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 09:34:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 09:34:24 Initializing JWE encryption key from synchronized object
	2025/11/01 09:34:24 Creating in-cluster Sidecar client
	2025/11/01 09:34:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:34:24 Serving insecurely on HTTP port: 9090
	2025/11/01 09:34:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:34:23 Starting overwatch
	
	
	==> storage-provisioner [785bd2a3eea280c35729177748a00a80a454d2d6597849d0896d10a19b7e2833] <==
	W1101 09:34:36.437940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:39.893367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:44.154374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:47.754060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:50.808745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:53.831293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:53.839469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:34:53.839738       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 09:34:53.840360       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c2b0c60a-1c26-4e31-8638-769a7831ea66", APIVersion:"v1", ResourceVersion:"697", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-703627_bf0a00d4-3e73-44aa-bd82-e680d1f7aa16 became leader
	I1101 09:34:53.840498       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-703627_bf0a00d4-3e73-44aa-bd82-e680d1f7aa16!
	W1101 09:34:53.851161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:53.854846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:34:53.941356       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-703627_bf0a00d4-3e73-44aa-bd82-e680d1f7aa16!
	W1101 09:34:55.858384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:55.863102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:57.866492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:57.871028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:59.877599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:59.888645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:35:01.894606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:35:01.904079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:35:03.908124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:35:03.923005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:35:05.931433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:35:05.937140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e3203b28a815bbf14e3e0b281844d7e2c9449efdae4d2b238d97510ac329b0a5] <==
	I1101 09:34:06.068141       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 09:34:36.070588       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-703627 -n default-k8s-diff-port-703627
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-703627 -n default-k8s-diff-port-703627: exit status 2 (382.726736ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-703627 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.89s)
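The post-mortem above shows the first storage-provisioner instance dying with an i/o timeout against 10.96.0.1:443 while the pause flow was being exercised. A minimal manual re-check of the same surface the harness drives, assuming the default-k8s-diff-port-703627 profile still exists (the first three commands mirror ones shown in this report; the curl probe of the in-cluster apiserver VIP is an added diagnostic and assumes curl is present in the node image):

	out/minikube-linux-arm64 pause -p default-k8s-diff-port-703627
	out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-703627
	out/minikube-linux-arm64 unpause -p default-k8s-diff-port-703627
	# probe the service VIP the provisioner timed out on (10.96.0.1:443)
	out/minikube-linux-arm64 ssh -p default-k8s-diff-port-703627 -- curl -sk --max-time 5 https://10.96.0.1:443/version
	kubectl --context default-k8s-diff-port-703627 get po -A --field-selector=status.phase!=Running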
E1101 09:40:44.500208 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/no-preload-357229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:40:53.508942 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:40:53.515394 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:40:53.526749 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:40:53.548154 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:40:53.589636 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:40:53.671004 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:40:53.832499 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:40:54.154147 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:40:54.796150 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:40:56.077411 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:40:58.639426 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:41:03.761373 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:41:08.717173 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:41:14.004875 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    

Test pass (259/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.36
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 4.73
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.18
18 TestDownloadOnly/v1.34.1/DeleteAll 0.25
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.24
21 TestBinaryMirror 0.64
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
27 TestAddons/Setup 178.92
31 TestAddons/serial/GCPAuth/Namespaces 0.17
32 TestAddons/serial/GCPAuth/FakeCredentials 9.94
48 TestAddons/StoppedEnableDisable 12.36
49 TestCertOptions 45.34
50 TestCertExpiration 256.71
52 TestForceSystemdFlag 39.68
53 TestForceSystemdEnv 40.47
58 TestErrorSpam/setup 31.83
59 TestErrorSpam/start 0.77
60 TestErrorSpam/status 1.04
61 TestErrorSpam/pause 5.67
62 TestErrorSpam/unpause 4.79
63 TestErrorSpam/stop 1.5
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 77.89
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 91.63
70 TestFunctional/serial/KubeContext 0.07
71 TestFunctional/serial/KubectlGetPods 0.15
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.36
75 TestFunctional/serial/CacheCmd/cache/add_local 1.09
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.77
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 33.93
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.41
86 TestFunctional/serial/LogsFileCmd 1.49
87 TestFunctional/serial/InvalidService 4.06
89 TestFunctional/parallel/ConfigCmd 0.49
90 TestFunctional/parallel/DashboardCmd 9.55
91 TestFunctional/parallel/DryRun 0.45
92 TestFunctional/parallel/InternationalLanguage 0.2
93 TestFunctional/parallel/StatusCmd 1.02
98 TestFunctional/parallel/AddonsCmd 0.18
99 TestFunctional/parallel/PersistentVolumeClaim 25.65
101 TestFunctional/parallel/SSHCmd 0.58
102 TestFunctional/parallel/CpCmd 2.13
104 TestFunctional/parallel/FileSync 0.27
105 TestFunctional/parallel/CertSync 1.69
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.7
113 TestFunctional/parallel/License 0.35
114 TestFunctional/parallel/Version/short 0.05
115 TestFunctional/parallel/Version/components 0.89
117 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
118 TestFunctional/parallel/ProfileCmd/profile_list 0.43
119 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
120 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
121 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
122 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
123 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
124 TestFunctional/parallel/ImageCommands/ImageBuild 4.35
125 TestFunctional/parallel/ImageCommands/Setup 0.64
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
133 TestFunctional/parallel/MountCmd/any-port 6.85
134 TestFunctional/parallel/MountCmd/specific-port 1.75
135 TestFunctional/parallel/MountCmd/VerifyCleanup 1.28
137 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.49
138 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
140 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.34
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
142 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
146 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
147 TestFunctional/parallel/ServiceCmd/List 1.32
148 TestFunctional/parallel/ServiceCmd/JSONOutput 1.29
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
155 TestFunctional/delete_echo-server_images 0.05
156 TestFunctional/delete_my-image_image 0.01
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 207.92
163 TestMultiControlPlane/serial/DeployApp 6.83
164 TestMultiControlPlane/serial/PingHostFromPods 1.48
165 TestMultiControlPlane/serial/AddWorkerNode 58.08
166 TestMultiControlPlane/serial/NodeLabels 0.1
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.11
168 TestMultiControlPlane/serial/CopyFile 19.48
169 TestMultiControlPlane/serial/StopSecondaryNode 12.88
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.81
171 TestMultiControlPlane/serial/RestartSecondaryNode 21.87
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.06
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 135.88
174 TestMultiControlPlane/serial/DeleteSecondaryNode 9.86
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.83
176 TestMultiControlPlane/serial/StopCluster 36.07
177 TestMultiControlPlane/serial/RestartCluster 82.11
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.77
179 TestMultiControlPlane/serial/AddSecondaryNode 75.31
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.06
185 TestJSONOutput/start/Command 78.36
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.84
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 43.85
211 TestKicCustomNetwork/use_default_bridge_network 39.25
212 TestKicExistingNetwork 34.69
213 TestKicCustomSubnet 38.42
214 TestKicStaticIP 35.92
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 73.18
219 TestMountStart/serial/StartWithMountFirst 8.12
220 TestMountStart/serial/VerifyMountFirst 0.29
221 TestMountStart/serial/StartWithMountSecond 9.78
222 TestMountStart/serial/VerifyMountSecond 0.26
223 TestMountStart/serial/DeleteFirst 1.71
224 TestMountStart/serial/VerifyMountPostDelete 0.28
225 TestMountStart/serial/Stop 1.29
226 TestMountStart/serial/RestartStopped 8.75
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 138.27
231 TestMultiNode/serial/DeployApp2Nodes 5.3
232 TestMultiNode/serial/PingHostFrom2Pods 0.95
233 TestMultiNode/serial/AddNode 58.12
234 TestMultiNode/serial/MultiNodeLabels 0.1
235 TestMultiNode/serial/ProfileList 0.76
236 TestMultiNode/serial/CopyFile 10.29
237 TestMultiNode/serial/StopNode 2.41
238 TestMultiNode/serial/StartAfterStop 8.28
239 TestMultiNode/serial/RestartKeepsNodes 81.87
240 TestMultiNode/serial/DeleteNode 5.75
241 TestMultiNode/serial/StopMultiNode 23.93
242 TestMultiNode/serial/RestartMultiNode 52.5
243 TestMultiNode/serial/ValidateNameConflict 37.39
248 TestPreload 132.57
253 TestInsufficientStorage 13
254 TestRunningBinaryUpgrade 64.89
256 TestKubernetesUpgrade 116.42
257 TestMissingContainerUpgrade 108.31
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 48.19
261 TestNoKubernetes/serial/StartWithStopK8s 48.82
262 TestNoKubernetes/serial/Start 8.88
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.39
264 TestNoKubernetes/serial/ProfileList 2.86
265 TestNoKubernetes/serial/Stop 1.41
266 TestNoKubernetes/serial/StartNoArgs 7.58
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
268 TestStoppedBinaryUpgrade/Setup 0.7
269 TestStoppedBinaryUpgrade/Upgrade 68.06
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.8
279 TestPause/serial/Start 85.4
287 TestNetworkPlugins/group/false 3.62
291 TestPause/serial/SecondStartNoReconfiguration 30.06
294 TestStartStop/group/old-k8s-version/serial/FirstStart 60.86
295 TestStartStop/group/old-k8s-version/serial/DeployApp 9.43
297 TestStartStop/group/old-k8s-version/serial/Stop 11.97
298 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
299 TestStartStop/group/old-k8s-version/serial/SecondStart 48.75
300 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
301 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
302 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
305 TestStartStop/group/no-preload/serial/FirstStart 73.07
307 TestStartStop/group/embed-certs/serial/FirstStart 84.89
308 TestStartStop/group/no-preload/serial/DeployApp 8.44
310 TestStartStop/group/no-preload/serial/Stop 12.36
311 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
312 TestStartStop/group/no-preload/serial/SecondStart 54.66
313 TestStartStop/group/embed-certs/serial/DeployApp 9.34
315 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
316 TestStartStop/group/embed-certs/serial/Stop 12.02
317 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.14
318 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.26
319 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.3
320 TestStartStop/group/embed-certs/serial/SecondStart 62.96
323 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 86.3
324 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
325 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
326 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
329 TestStartStop/group/newest-cni/serial/FirstStart 39.99
330 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.39
332 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.2
333 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
334 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 58.09
335 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/Stop 1.46
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.26
339 TestStartStop/group/newest-cni/serial/SecondStart 20.51
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.38
344 TestNetworkPlugins/group/auto/Start 85.38
345 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
346 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.13
347 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.31
349 TestNetworkPlugins/group/kindnet/Start 78.05
350 TestNetworkPlugins/group/auto/KubeletFlags 0.3
351 TestNetworkPlugins/group/auto/NetCatPod 10.27
352 TestNetworkPlugins/group/auto/DNS 0.15
353 TestNetworkPlugins/group/auto/Localhost 0.13
354 TestNetworkPlugins/group/auto/HairPin 0.14
355 TestNetworkPlugins/group/calico/Start 66.89
356 TestNetworkPlugins/group/kindnet/ControllerPod 6
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.44
358 TestNetworkPlugins/group/kindnet/NetCatPod 12.32
359 TestNetworkPlugins/group/kindnet/DNS 0.25
360 TestNetworkPlugins/group/kindnet/Localhost 0.18
361 TestNetworkPlugins/group/kindnet/HairPin 0.22
362 TestNetworkPlugins/group/custom-flannel/Start 67.25
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/calico/KubeletFlags 0.4
365 TestNetworkPlugins/group/calico/NetCatPod 12.45
366 TestNetworkPlugins/group/calico/DNS 0.22
367 TestNetworkPlugins/group/calico/Localhost 0.19
368 TestNetworkPlugins/group/calico/HairPin 0.18
369 TestNetworkPlugins/group/bridge/Start 80.78
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.4
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.31
372 TestNetworkPlugins/group/custom-flannel/DNS 0.19
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
375 TestNetworkPlugins/group/flannel/Start 61.33
376 TestNetworkPlugins/group/bridge/KubeletFlags 0.4
377 TestNetworkPlugins/group/bridge/NetCatPod 11.4
378 TestNetworkPlugins/group/bridge/DNS 0.15
379 TestNetworkPlugins/group/bridge/Localhost 0.14
380 TestNetworkPlugins/group/bridge/HairPin 0.13
381 TestNetworkPlugins/group/flannel/ControllerPod 6
382 TestNetworkPlugins/group/flannel/KubeletFlags 0.38
383 TestNetworkPlugins/group/flannel/NetCatPod 11.4
384 TestNetworkPlugins/group/enable-default-cni/Start 77.28
385 TestNetworkPlugins/group/flannel/DNS 0.15
386 TestNetworkPlugins/group/flannel/Localhost 0.16
387 TestNetworkPlugins/group/flannel/HairPin 0.13
388 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
389 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.25
390 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
391 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
392 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
x
+
TestDownloadOnly/v1.28.0/json-events (5.36s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-778815 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-778815 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.356210228s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.36s)
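The json-events subtest runs minikube with -o=json, so progress is emitted as structured JSON events on stdout rather than the usual text UI. A rough way to eyeball those events by hand, assuming jq is installed and that step events expose a data.currentstep field (the field names here are assumptions inferred from the step-counting JSONOutput subtests, not verified against the event schema):

	# hypothetical: pipe the same --download-only start command's stdout through jq
	out/minikube-linux-arm64 start -o=json --download-only -p download-only-778815 --force \
	  --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker \
	  | jq -r 'select(.data.currentstep != null) | "\(.data.currentstep) \(.data.name)"'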

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1101 08:29:28.404573 2315982 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1101 08:29:28.404655 2315982 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-778815
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-778815: exit status 85 (84.356571ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-778815 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-778815 │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 08:29:23
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 08:29:23.091166 2315987 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:29:23.091294 2315987 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:29:23.091304 2315987 out.go:374] Setting ErrFile to fd 2...
	I1101 08:29:23.091309 2315987 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:29:23.091541 2315987 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	W1101 08:29:23.091672 2315987 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21835-2314135/.minikube/config/config.json: open /home/jenkins/minikube-integration/21835-2314135/.minikube/config/config.json: no such file or directory
	I1101 08:29:23.092092 2315987 out.go:368] Setting JSON to true
	I1101 08:29:23.092933 2315987 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":61909,"bootTime":1761923854,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 08:29:23.092991 2315987 start.go:143] virtualization:  
	I1101 08:29:23.096930 2315987 out.go:99] [download-only-778815] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1101 08:29:23.097089 2315987 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball: no such file or directory
	I1101 08:29:23.097214 2315987 notify.go:221] Checking for updates...
	I1101 08:29:23.100025 2315987 out.go:171] MINIKUBE_LOCATION=21835
	I1101 08:29:23.103064 2315987 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 08:29:23.106000 2315987 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 08:29:23.108888 2315987 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2314135/.minikube
	I1101 08:29:23.111763 2315987 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1101 08:29:23.117328 2315987 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1101 08:29:23.117577 2315987 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 08:29:23.139807 2315987 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 08:29:23.139931 2315987 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:29:23.197160 2315987 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-01 08:29:23.188081222 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 08:29:23.197270 2315987 docker.go:319] overlay module found
	I1101 08:29:23.200231 2315987 out.go:99] Using the docker driver based on user configuration
	I1101 08:29:23.200275 2315987 start.go:309] selected driver: docker
	I1101 08:29:23.200283 2315987 start.go:930] validating driver "docker" against <nil>
	I1101 08:29:23.200404 2315987 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:29:23.251635 2315987 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-01 08:29:23.242479989 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 08:29:23.251796 2315987 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 08:29:23.252116 2315987 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1101 08:29:23.252279 2315987 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 08:29:23.255360 2315987 out.go:171] Using Docker driver with root privileges
	I1101 08:29:23.258359 2315987 cni.go:84] Creating CNI manager for ""
	I1101 08:29:23.258433 2315987 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 08:29:23.258447 2315987 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 08:29:23.258522 2315987 start.go:353] cluster config:
	{Name:download-only-778815 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-778815 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 08:29:23.261486 2315987 out.go:99] Starting "download-only-778815" primary control-plane node in "download-only-778815" cluster
	I1101 08:29:23.261507 2315987 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 08:29:23.264403 2315987 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1101 08:29:23.264442 2315987 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 08:29:23.264600 2315987 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 08:29:23.278935 2315987 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 08:29:23.279135 2315987 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1101 08:29:23.279227 2315987 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 08:29:23.325265 2315987 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1101 08:29:23.325293 2315987 cache.go:59] Caching tarball of preloaded images
	I1101 08:29:23.325476 2315987 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 08:29:23.329767 2315987 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1101 08:29:23.329792 2315987 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1101 08:29:23.417819 2315987 preload.go:290] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1101 08:29:23.417993 2315987 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1101 08:29:26.627828 2315987 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1101 08:29:26.628227 2315987 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/download-only-778815/config.json ...
	I1101 08:29:26.628264 2315987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/download-only-778815/config.json: {Name:mkee2ffe5fd86b98616fe76784f8fb6e52b96da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:26.628425 2315987 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 08:29:26.628605 2315987 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-778815 host does not exist
	  To start a cluster, run: "minikube start -p download-only-778815"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
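The Last Start log above records the preload tarball being fetched with an md5 checksum obtained from the GCS API (preload.go:290). A quick manual integrity check of the cached artifact, using the cache path and digest exactly as they appear in that log:

	# digest taken from the "Got checksum from GCS API" line above
	echo "e092595ade89dbfc477bd4cd6b9c633b  /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4" | md5sum -c -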

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-778815
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (4.73s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-607531 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-607531 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.726259552s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.73s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1101 08:29:33.559193 2315982 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1101 08:29:33.559235 2315982 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-2314135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.18s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-607531
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-607531: exit status 85 (180.028282ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-778815 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-778815 │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:29 UTC │
	│ delete  │ -p download-only-778815                                                                                                                                                   │ download-only-778815 │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-607531 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-607531 │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 08:29:28
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 08:29:28.880752 2316187 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:29:28.880880 2316187 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:29:28.880891 2316187 out.go:374] Setting ErrFile to fd 2...
	I1101 08:29:28.880896 2316187 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:29:28.881149 2316187 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 08:29:28.881544 2316187 out.go:368] Setting JSON to true
	I1101 08:29:28.882345 2316187 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":61915,"bootTime":1761923854,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 08:29:28.882409 2316187 start.go:143] virtualization:  
	I1101 08:29:28.885540 2316187 out.go:99] [download-only-607531] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 08:29:28.885804 2316187 notify.go:221] Checking for updates...
	I1101 08:29:28.889183 2316187 out.go:171] MINIKUBE_LOCATION=21835
	I1101 08:29:28.892158 2316187 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 08:29:28.895015 2316187 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 08:29:28.897902 2316187 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2314135/.minikube
	I1101 08:29:28.900823 2316187 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1101 08:29:28.906671 2316187 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1101 08:29:28.906920 2316187 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 08:29:28.937027 2316187 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 08:29:28.937137 2316187 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:29:28.993408 2316187 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-11-01 08:29:28.98450808 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 08:29:28.993513 2316187 docker.go:319] overlay module found
	I1101 08:29:28.996517 2316187 out.go:99] Using the docker driver based on user configuration
	I1101 08:29:28.996552 2316187 start.go:309] selected driver: docker
	I1101 08:29:28.996558 2316187 start.go:930] validating driver "docker" against <nil>
	I1101 08:29:28.996652 2316187 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:29:29.057488 2316187 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-11-01 08:29:29.048826391 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 08:29:29.057646 2316187 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 08:29:29.057902 2316187 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1101 08:29:29.058058 2316187 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 08:29:29.061210 2316187 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-607531 host does not exist
	  To start a cluster, run: "minikube start -p download-only-607531"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.18s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.25s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-607531
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.24s)

                                                
                                    
x
+
TestBinaryMirror (0.64s)

                                                
                                                
=== RUN   TestBinaryMirror
I1101 08:29:35.336631 2315982 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-203275 --alsologtostderr --binary-mirror http://127.0.0.1:39087 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-203275" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-203275
--- PASS: TestBinaryMirror (0.64s)
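
The URL recorded by binary.go:74 above is the kubectl download location with a checksum hint appended. A minimal sketch of that URL pattern, assuming only what is visible in the log line (the helper name is illustrative, not minikube's actual code):

	package main

	import "fmt"

	// kubectlURL rebuilds the dl.k8s.io URL pattern seen in the log: the binary
	// location plus a "?checksum=file:<same URL>.sha256" hint.
	func kubectlURL(version, osName, arch string) string {
		base := fmt.Sprintf("https://dl.k8s.io/release/%s/bin/%s/%s/kubectl", version, osName, arch)
		return fmt.Sprintf("%s?checksum=file:%s.sha256", base, base)
	}

	func main() {
		// Prints the same URL logged above for v1.34.1 on linux/arm64.
		fmt.Println(kubectlURL("v1.34.1", "linux", "arm64"))
	}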

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-377223
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-377223: exit status 85 (81.722767ms)

                                                
                                                
-- stdout --
	* Profile "addons-377223" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-377223"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-377223
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-377223: exit status 85 (92.513003ms)

                                                
                                                
-- stdout --
	* Profile "addons-377223" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-377223"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
x
+
TestAddons/Setup (178.92s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-377223 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-377223 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m58.924233829s)
--- PASS: TestAddons/Setup (178.92s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-377223 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-377223 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (9.94s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-377223 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-377223 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b6dad3ea-1c69-4b11-be63-89043a79633e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b6dad3ea-1c69-4b11-be63-89043a79633e] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004008662s
addons_test.go:694: (dbg) Run:  kubectl --context addons-377223 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-377223 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-377223 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-377223 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.94s)
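
The exec steps above verify that the gcp-auth webhook injected both the GOOGLE_APPLICATION_CREDENTIALS environment variable and the mounted /google-app-creds.json file into the busybox pod. A minimal sketch of the same two probes, reusing the kubectl context and pod name from this run (podSh is an illustrative helper):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// podSh runs a shell command inside the busybox test pod, as the test does.
	func podSh(cmd string) (string, error) {
		out, err := exec.Command("kubectl", "--context", "addons-377223",
			"exec", "busybox", "--", "/bin/sh", "-c", cmd).Output()
		return string(out), err
	}

	func main() {
		env, err := podSh("printenv GOOGLE_APPLICATION_CREDENTIALS")
		if err != nil {
			panic(err)
		}
		creds, err := podSh("cat /google-app-creds.json")
		if err != nil {
			panic(err)
		}
		fmt.Println("credentials path:", strings.TrimSpace(env), "- creds file bytes:", len(creds))
	}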

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.36s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-377223
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-377223: (12.069681515s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-377223
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-377223
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-377223
--- PASS: TestAddons/StoppedEnableDisable (12.36s)

                                                
                                    
x
+
TestCertOptions (45.34s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-578478 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-578478 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (42.326577503s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-578478 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-578478 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-578478 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-578478" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-578478
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-578478: (2.136347828s)
--- PASS: TestCertOptions (45.34s)
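
The openssl step above confirms that the extra --apiserver-ips and --apiserver-names ended up in the apiserver serving certificate. A minimal sketch of the same check in Go, assuming the certificate has first been copied out of the node to a local apiserver.crt (the real test greps the openssl text output over ssh instead):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"net"
		"os"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt") // assumed local copy of /var/lib/minikube/certs/apiserver.crt
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("apiserver.crt is not PEM encoded")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Look for the SANs requested on the start command above.
		hasIP := false
		for _, ip := range cert.IPAddresses {
			if ip.Equal(net.ParseIP("192.168.15.15")) {
				hasIP = true
			}
		}
		hasName := false
		for _, name := range cert.DNSNames {
			if name == "www.google.com" {
				hasName = true
			}
		}
		fmt.Println("192.168.15.15 in IP SANs:", hasIP, "- www.google.com in DNS SANs:", hasName)
	}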

                                                
                                    
x
+
TestCertExpiration (256.71s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-218273 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-218273 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (44.409270037s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-218273 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-218273 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (29.380828553s)
helpers_test.go:175: Cleaning up "cert-expiration-218273" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-218273
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-218273: (2.913917097s)
--- PASS: TestCertExpiration (256.71s)

                                                
                                    
x
+
TestForceSystemdFlag (39.68s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-370515 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-370515 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.847949044s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-370515 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-370515" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-370515
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-370515: (2.532817156s)
--- PASS: TestForceSystemdFlag (39.68s)
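
The ssh step above reads the CRI-O drop-in that --force-systemd writes. A minimal sketch of that verification, assuming the drop-in sets cgroup_manager = "systemd" (the key CRI-O uses for its cgroup driver); the binary path and profile name are the ones from this run:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Read the drop-in over "minikube ssh", as the test does.
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "force-systemd-flag-370515",
			"ssh", "cat /etc/crio/crio.conf.d/02-crio.conf").CombinedOutput()
		if err != nil {
			panic(fmt.Errorf("ssh failed: %w: %s", err, out))
		}
		fmt.Println(`cgroup_manager = "systemd" present:`,
			strings.Contains(string(out), `cgroup_manager = "systemd"`))
	}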

                                                
                                    
x
+
TestForceSystemdEnv (40.47s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-778652 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-778652 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.764504358s)
helpers_test.go:175: Cleaning up "force-systemd-env-778652" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-778652
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-778652: (2.708688542s)
--- PASS: TestForceSystemdEnv (40.47s)

                                                
                                    
x
+
TestErrorSpam/setup (31.83s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-015477 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-015477 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-015477 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-015477 --driver=docker  --container-runtime=crio: (31.829761518s)
--- PASS: TestErrorSpam/setup (31.83s)

                                                
                                    
x
+
TestErrorSpam/start (0.77s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015477 --log_dir /tmp/nospam-015477 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015477 --log_dir /tmp/nospam-015477 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015477 --log_dir /tmp/nospam-015477 start --dry-run
--- PASS: TestErrorSpam/start (0.77s)

                                                
                                    
x
+
TestErrorSpam/status (1.04s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015477 --log_dir /tmp/nospam-015477 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015477 --log_dir /tmp/nospam-015477 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015477 --log_dir /tmp/nospam-015477 status
--- PASS: TestErrorSpam/status (1.04s)

                                                
                                    
x
+
TestErrorSpam/pause (5.67s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015477 --log_dir /tmp/nospam-015477 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-015477 --log_dir /tmp/nospam-015477 pause: exit status 80 (2.605405915s)

                                                
                                                
-- stdout --
	* Pausing node nospam-015477 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:36:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_4.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-015477 --log_dir /tmp/nospam-015477 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015477 --log_dir /tmp/nospam-015477 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-015477 --log_dir /tmp/nospam-015477 pause: exit status 80 (1.449588661s)

                                                
                                                
-- stdout --
	* Pausing node nospam-015477 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:36:43Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_4.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-015477 --log_dir /tmp/nospam-015477 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015477 --log_dir /tmp/nospam-015477 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-015477 --log_dir /tmp/nospam-015477 pause: exit status 80 (1.608025174s)

                                                
                                                
-- stdout --
	* Pausing node nospam-015477 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:36:45Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_4.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-015477 --log_dir /tmp/nospam-015477 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.67s)
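
Every pause attempt above fails at the same step: listing runc containers on the node so they can be paused. A minimal sketch of that step, using the exact command from the stderr ("sudo runc list -f json") and the usual field names of its JSON output; on this node the call keeps failing because /run/runc does not exist:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcContainer holds the fields of interest from "runc list -f json".
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "nospam-015477",
			"ssh", "sudo runc list -f json").Output()
		if err != nil {
			// This is the branch the log keeps hitting (exit status 80 / GUEST_PAUSE).
			fmt.Println("runc list failed:", err)
			return
		}
		var containers []runcContainer
		if err := json.Unmarshal(out, &containers); err != nil {
			panic(err)
		}
		for _, c := range containers {
			fmt.Println(c.ID, c.Status)
		}
	}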

                                                
                                    
x
+
TestErrorSpam/unpause (4.79s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015477 --log_dir /tmp/nospam-015477 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-015477 --log_dir /tmp/nospam-015477 unpause: exit status 80 (1.797398589s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-015477 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:36:47Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_4.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-015477 --log_dir /tmp/nospam-015477 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015477 --log_dir /tmp/nospam-015477 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-015477 --log_dir /tmp/nospam-015477 unpause: exit status 80 (1.377772177s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-015477 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:36:48Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_4.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-015477 --log_dir /tmp/nospam-015477 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015477 --log_dir /tmp/nospam-015477 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-015477 --log_dir /tmp/nospam-015477 unpause: exit status 80 (1.613460724s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-015477 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:36:50Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_4.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-015477 --log_dir /tmp/nospam-015477 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (4.79s)

                                                
                                    
x
+
TestErrorSpam/stop (1.5s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015477 --log_dir /tmp/nospam-015477 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-015477 --log_dir /tmp/nospam-015477 stop: (1.304959957s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015477 --log_dir /tmp/nospam-015477 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015477 --log_dir /tmp/nospam-015477 stop
--- PASS: TestErrorSpam/stop (1.50s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21835-2314135/.minikube/files/etc/test/nested/copy/2315982/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (77.89s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-700813 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1101 08:37:35.751358 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:35.757785 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:35.769141 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:35.790500 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:35.831940 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:35.913316 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:36.075116 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:36.396873 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:37.038863 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:38.320173 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:40.881570 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:46.003088 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:56.244473 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-700813 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m17.893932356s)
--- PASS: TestFunctional/serial/StartWithProxy (77.89s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (91.63s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1101 08:38:13.583668 2315982 config.go:182] Loaded profile config "functional-700813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-700813 --alsologtostderr -v=8
E1101 08:38:16.725843 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:38:57.688223 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-700813 --alsologtostderr -v=8: (1m31.631949925s)
functional_test.go:678: soft start took 1m31.632454591s for "functional-700813" cluster.
I1101 08:39:45.215954 2315982 config.go:182] Loaded profile config "functional-700813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (91.63s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-700813 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.15s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.36s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-700813 cache add registry.k8s.io/pause:3.1: (1.132176303s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-700813 cache add registry.k8s.io/pause:3.3: (1.15855713s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-700813 cache add registry.k8s.io/pause:latest: (1.072078121s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.36s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-700813 /tmp/TestFunctionalserialCacheCmdcacheadd_local130611701/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 cache add minikube-local-cache-test:functional-700813
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 cache delete minikube-local-cache-test:functional-700813
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-700813
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.77s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-700813 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (287.062982ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.77s)
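
The sequence above is remove, verify-gone, reload, verify-back. A minimal sketch of the same cycle driven from Go, reusing the binary path, profile name and image from this run (the run helper is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run shells out to the minikube binary used in this report and echoes the output.
	func run(args ...string) error {
		out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
		fmt.Printf("$ minikube %v\n%s", args, out)
		return err
	}

	func main() {
		const img = "registry.k8s.io/pause:latest"
		_ = run("-p", "functional-700813", "ssh", "sudo crictl rmi "+img)
		if run("-p", "functional-700813", "ssh", "sudo crictl inspecti "+img) == nil {
			fmt.Println("image unexpectedly still present after rmi")
		}
		_ = run("-p", "functional-700813", "cache", "reload")
		if run("-p", "functional-700813", "ssh", "sudo crictl inspecti "+img) != nil {
			fmt.Println("cache reload did not restore", img)
		}
	}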

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 kubectl -- --context functional-700813 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-700813 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (33.93s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-700813 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1101 08:40:19.612126 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-700813 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.918142441s)
functional_test.go:776: restart took 33.918249154s for "functional-700813" cluster.
I1101 08:40:26.411343 2315982 config.go:182] Loaded profile config "functional-700813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (33.93s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-700813 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.41s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-700813 logs: (1.411257668s)
--- PASS: TestFunctional/serial/LogsCmd (1.41s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.49s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 logs --file /tmp/TestFunctionalserialLogsFileCmd327950685/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-700813 logs --file /tmp/TestFunctionalserialLogsFileCmd327950685/001/logs.txt: (1.490583389s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.49s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.06s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-700813 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-700813
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-700813: exit status 115 (388.582258ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31313 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-700813 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.06s)
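
The SVC_UNREACHABLE exit above happens because invalid-svc selects no running pod, so nothing answers behind the NodePort. A minimal sketch of one way to see that from outside minikube, by listing the service's ready endpoint addresses (the jsonpath query is illustrative, not what minikube runs internally):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-700813",
			"get", "endpoints", "invalid-svc",
			"-o", "jsonpath={.subsets[*].addresses[*].ip}").CombinedOutput()
		if err != nil {
			fmt.Println("kubectl failed:", err, string(out))
			return
		}
		ips := strings.Fields(string(out))
		fmt.Println("ready endpoint IPs:", ips, "- service reachable:", len(ips) > 0)
	}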

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-700813 config get cpus: exit status 14 (71.701559ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-700813 config get cpus: exit status 14 (93.431343ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)
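
In the run above, "config get cpus" exits with status 14 whenever the key has been unset, so a caller has to treat that status as "not set" rather than as a hard failure. A minimal sketch of that handling, taking the meaning of exit code 14 from this log rather than from documentation (getCPUs is an illustrative helper):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	// getCPUs returns the configured value and whether the key was set at all.
	func getCPUs(profile string) (string, bool, error) {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
			"config", "get", "cpus").Output()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
			return "", false, nil // "specified key could not be found in config"
		}
		if err != nil {
			return "", false, err
		}
		return strings.TrimSpace(string(out)), true, nil
	}

	func main() {
		fmt.Println(getCPUs("functional-700813"))
	}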

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (9.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-700813 --alsologtostderr -v=1]
2025/11/01 08:41:12 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-700813 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 2341938: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.55s)
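
The DEBUG line above is a readiness poll against the proxy URL that "minikube dashboard --url" printed. A minimal sketch of such a poll, reusing the URL and port from this run (the retry count and interval are arbitrary choices):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		const url = "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
		for i := 0; i < 30; i++ {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("dashboard proxy is answering")
					return
				}
			}
			time.Sleep(time.Second)
		}
		fmt.Println("dashboard proxy never became ready")
	}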

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-700813 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-700813 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (197.598523ms)

                                                
                                                
-- stdout --
	* [functional-700813] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21835
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21835-2314135/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2314135/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 08:40:53.641441 2341278 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:40:53.641557 2341278 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:40:53.641569 2341278 out.go:374] Setting ErrFile to fd 2...
	I1101 08:40:53.641575 2341278 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:40:53.641855 2341278 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 08:40:53.642249 2341278 out.go:368] Setting JSON to false
	I1101 08:40:53.643146 2341278 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":62600,"bootTime":1761923854,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 08:40:53.643215 2341278 start.go:143] virtualization:  
	I1101 08:40:53.648182 2341278 out.go:179] * [functional-700813] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 08:40:53.651097 2341278 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 08:40:53.651164 2341278 notify.go:221] Checking for updates...
	I1101 08:40:53.656865 2341278 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 08:40:53.659763 2341278 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 08:40:53.662593 2341278 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2314135/.minikube
	I1101 08:40:53.665525 2341278 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 08:40:53.668330 2341278 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 08:40:53.671608 2341278 config.go:182] Loaded profile config "functional-700813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:40:53.672330 2341278 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 08:40:53.700361 2341278 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 08:40:53.700465 2341278 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:40:53.771114 2341278 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 08:40:53.761860837 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 08:40:53.771226 2341278 docker.go:319] overlay module found
	I1101 08:40:53.774259 2341278 out.go:179] * Using the docker driver based on existing profile
	I1101 08:40:53.777092 2341278 start.go:309] selected driver: docker
	I1101 08:40:53.777113 2341278 start.go:930] validating driver "docker" against &{Name:functional-700813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-700813 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 08:40:53.777218 2341278 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 08:40:53.780879 2341278 out.go:203] 
	W1101 08:40:53.783682 2341278 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1101 08:40:53.786380 2341278 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-700813 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.45s)
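Note: the run whose stderr is quoted above deliberately requested 250MB, so the RSRC_INSUFFICIENT_REQ_MEMORY exit is the expected half of this test; only the second invocation (without the undersized --memory flag) has to validate cleanly. A minimal way to reproduce both outcomes against the same profile, assuming the binary and profile from this run:

  # expected to fail (RSRC_INSUFFICIENT_REQ_MEMORY): 250MB is below the 1800MB usable minimum
  out/minikube-linux-arm64 start -p functional-700813 --dry-run --memory 250MB --driver=docker --container-runtime=crio
  # expected to pass: omitting --memory keeps the profile's existing 4096MB allocation
  out/minikube-linux-arm64 start -p functional-700813 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=crio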

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-700813 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-700813 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (197.868857ms)

                                                
                                                
-- stdout --
	* [functional-700813] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21835
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21835-2314135/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2314135/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 08:40:53.454506 2341230 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:40:53.454677 2341230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:40:53.454687 2341230 out.go:374] Setting ErrFile to fd 2...
	I1101 08:40:53.454692 2341230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:40:53.455067 2341230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 08:40:53.455459 2341230 out.go:368] Setting JSON to false
	I1101 08:40:53.456477 2341230 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":62599,"bootTime":1761923854,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 08:40:53.456544 2341230 start.go:143] virtualization:  
	I1101 08:40:53.459918 2341230 out.go:179] * [functional-700813] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1101 08:40:53.463653 2341230 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 08:40:53.463767 2341230 notify.go:221] Checking for updates...
	I1101 08:40:53.469388 2341230 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 08:40:53.472221 2341230 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 08:40:53.475287 2341230 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2314135/.minikube
	I1101 08:40:53.478225 2341230 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 08:40:53.481116 2341230 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 08:40:53.484430 2341230 config.go:182] Loaded profile config "functional-700813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:40:53.484978 2341230 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 08:40:53.509432 2341230 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 08:40:53.509710 2341230 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:40:53.573296 2341230 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 08:40:53.564476224 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 08:40:53.573411 2341230 docker.go:319] overlay module found
	I1101 08:40:53.576458 2341230 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1101 08:40:53.579316 2341230 start.go:309] selected driver: docker
	I1101 08:40:53.579338 2341230 start.go:930] validating driver "docker" against &{Name:functional-700813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-700813 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 08:40:53.579442 2341230 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 08:40:53.583154 2341230 out.go:203] 
	W1101 08:40:53.586096 2341230 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1101 08:40:53.588990 2341230 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)
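Note: the French stderr above exercises the same RSRC_INSUFFICIENT_REQ_MEMORY path as the DryRun test; only the message catalogue changes with the process locale. A rough manual reproduction, assuming minikube picks its language from the standard LC_ALL/LANG environment variables (the test harness sets the locale itself):

  LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-700813 --dry-run --memory 250MB --driver=docker --container-runtime=crio
  # exit status 23, with "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY ..." instead of the English text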

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.02s)
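Note: the three invocations above cover the default table output, a custom Go template, and JSON. The template fields used by the test (.Host, .Kubelet, .APIServer, .Kubeconfig) can be combined freely; a couple of hedged variations on the same idea:

  # single-field template, convenient in scripts
  out/minikube-linux-arm64 -p functional-700813 status -f '{{.Host}}'
  # machine-readable form of the same data
  out/minikube-linux-arm64 -p functional-700813 status -o json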

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (25.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [24974719-9001-4ca9-9aba-e728c8821776] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003084019s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-700813 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-700813 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-700813 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-700813 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [b24d0f68-93ed-400b-8ebd-ed5f3b9c18ed] Pending
helpers_test.go:352: "sp-pod" [b24d0f68-93ed-400b-8ebd-ed5f3b9c18ed] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [b24d0f68-93ed-400b-8ebd-ed5f3b9c18ed] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004684797s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-700813 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-700813 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-700813 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [6d30dff3-9228-49b0-ab0a-e63cd8a1cb64] Pending
helpers_test.go:352: "sp-pod" [6d30dff3-9228-49b0-ab0a-e63cd8a1cb64] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [6d30dff3-9228-49b0-ab0a-e63cd8a1cb64] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003745823s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-700813 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.65s)
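Note: the point of the sequence above is that data written through the claim survives deletion and re-creation of the consuming pod. Condensed into the underlying kubectl calls, using the names visible in the log (the YAML manifests are the test's own testdata files):

  kubectl --context functional-700813 apply -f testdata/storage-provisioner/pvc.yaml   # creates PVC "myclaim"
  kubectl --context functional-700813 apply -f testdata/storage-provisioner/pod.yaml   # pod "sp-pod" mounts the claim at /tmp/mount
  kubectl --context functional-700813 exec sp-pod -- touch /tmp/mount/foo              # write through the volume
  kubectl --context functional-700813 delete -f testdata/storage-provisioner/pod.yaml  # drop the pod, keep the claim
  kubectl --context functional-700813 apply -f testdata/storage-provisioner/pod.yaml   # fresh pod, same claim
  kubectl --context functional-700813 exec sp-pod -- ls /tmp/mount                     # "foo" is still there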

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.58s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 ssh -n functional-700813 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 cp functional-700813:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3314315495/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 ssh -n functional-700813 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 ssh -n functional-700813 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.13s)
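Note: the three transfers above show `minikube cp` in both directions; the last one also shows that missing parent directories are created on the node. Summarized (the local destination in the second line is a hypothetical path of your choosing):

  out/minikube-linux-arm64 -p functional-700813 cp testdata/cp-test.txt /home/docker/cp-test.txt              # host -> node
  out/minikube-linux-arm64 -p functional-700813 cp functional-700813:/home/docker/cp-test.txt ./cp-test.txt   # node -> host
  out/minikube-linux-arm64 -p functional-700813 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt       # parent dirs created on the node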

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/2315982/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 ssh "sudo cat /etc/test/nested/copy/2315982/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)
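Note: this relies on minikube's file-sync mechanism: files placed under the .minikube/files directory on the host are copied into the node at the same relative path during start, which is presumably how the test seeds /etc/test/nested/copy/2315982/hosts (the numeric component appears to be the test process ID). A hedged sketch of the same flow by hand, with "demo" as a made-up directory name:

  mkdir -p "$HOME/.minikube/files/etc/test/nested/copy/demo"
  echo "Test file for checking file sync process" > "$HOME/.minikube/files/etc/test/nested/copy/demo/hosts"
  out/minikube-linux-arm64 -p functional-700813 start                                      # files are copied in during start/provisioning
  out/minikube-linux-arm64 -p functional-700813 ssh "cat /etc/test/nested/copy/demo/hosts"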

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/2315982.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 ssh "sudo cat /etc/ssl/certs/2315982.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/2315982.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 ssh "sudo cat /usr/share/ca-certificates/2315982.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/23159822.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 ssh "sudo cat /etc/ssl/certs/23159822.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/23159822.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 ssh "sudo cat /usr/share/ca-certificates/23159822.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.69s)
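Note: the paired filenames checked above are not arbitrary: /etc/ssl/certs/51391683.0 and /etc/ssl/certs/3ec20f2e.0 are presumably the OpenSSL subject-hash links for the two synced PEM files. The hash can be recomputed on the node, assuming openssl is available inside the image:

  out/minikube-linux-arm64 -p functional-700813 ssh "openssl x509 -noout -subject_hash -in /usr/share/ca-certificates/2315982.pem"
  # should print 51391683, matching /etc/ssl/certs/51391683.0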

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-700813 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-700813 ssh "sudo systemctl is-active docker": exit status 1 (340.560073ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-700813 ssh "sudo systemctl is-active containerd": exit status 1 (359.976494ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.70s)
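Note: the "Non-zero exit ... exit status 1" lines are expected here: with crio as the active runtime, `systemctl is-active` reports docker and containerd as inactive and exits non-zero (the inner "status 3" is systemd's code for an inactive unit, surfaced through ssh as a failure). A quick manual check of all three runtimes:

  out/minikube-linux-arm64 -p functional-700813 ssh "sudo systemctl is-active crio"        # active, exit 0
  out/minikube-linux-arm64 -p functional-700813 ssh "sudo systemctl is-active docker"      # inactive, exit 3 inside the node
  out/minikube-linux-arm64 -p functional-700813 ssh "sudo systemctl is-active containerd"  # inactive, exit 3 inside the node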

                                                
                                    
x
+
TestFunctional/parallel/License (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.89s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "360.764847ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "73.429272ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "359.65534ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "66.487653ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)
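Note: the timings above show why --light exists: it skips validating cluster status, so it returns in tens of milliseconds instead of hundreds. A hedged example of consuming the JSON form, assuming jq on the host and the usual valid/invalid top-level keys in the output:

  out/minikube-linux-arm64 profile list -o json --light | jq -r '.valid[].Name'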

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-700813 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-700813 image ls --format short --alsologtostderr:
I1101 08:50:45.398906 2343988 out.go:360] Setting OutFile to fd 1 ...
I1101 08:50:45.399207 2343988 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:50:45.399240 2343988 out.go:374] Setting ErrFile to fd 2...
I1101 08:50:45.399260 2343988 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:50:45.399575 2343988 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
I1101 08:50:45.400354 2343988 config.go:182] Loaded profile config "functional-700813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 08:50:45.400532 2343988 config.go:182] Loaded profile config "functional-700813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 08:50:45.401028 2343988 cli_runner.go:164] Run: docker container inspect functional-700813 --format={{.State.Status}}
I1101 08:50:45.426872 2343988 ssh_runner.go:195] Run: systemctl --version
I1101 08:50:45.426928 2343988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-700813
I1101 08:50:45.446329 2343988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36065 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/functional-700813/id_rsa Username:docker}
I1101 08:50:45.550350 2343988 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-700813 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/nginx                 │ alpine             │ cbad6347cca28 │ 54.8MB │
│ docker.io/library/nginx                 │ latest             │ 46fabdd7f288c │ 176MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ localhost/my-image                      │ functional-700813  │ bb4993659e499 │ 1.64MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-700813 image ls --format table --alsologtostderr:
I1101 08:50:50.436179 2344460 out.go:360] Setting OutFile to fd 1 ...
I1101 08:50:50.436317 2344460 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:50:50.436342 2344460 out.go:374] Setting ErrFile to fd 2...
I1101 08:50:50.436359 2344460 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:50:50.436614 2344460 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
I1101 08:50:50.437267 2344460 config.go:182] Loaded profile config "functional-700813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 08:50:50.437423 2344460 config.go:182] Loaded profile config "functional-700813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 08:50:50.437913 2344460 cli_runner.go:164] Run: docker container inspect functional-700813 --format={{.State.Status}}
I1101 08:50:50.459465 2344460 ssh_runner.go:195] Run: systemctl --version
I1101 08:50:50.459532 2344460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-700813
I1101 08:50:50.477609 2344460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36065 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/functional-700813/id_rsa Username:docker}
I1101 08:50:50.582324 2344460 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-700813 image ls --format json --alsologtostderr:
[{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"bb4993659e4994a75b4642167f465fc8e57c066799cd3f32072f42808566819d","repoDigests":["localhost/my-image@sha256:12c789d8ae4682078d1858820a43457959d55270398d9cd6cc1d8cf23e70f4ef"],"repoTags":["localhost/my-image:functional-700813"],"size":"1640791"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"3d18732f8686cc3c8780
55d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"ec6b569199a5a594a4c1af042fd5f9d587318f472686b07757e63905e0083e81","repoDigests":["docker.io/library/85936e9df42c60c0aa60ab168aa6f8b19a83bec9900cb7e90d09fc008445933f-tmp@sha256:2ac51fdc0c922a57134ba8bf16acddd9e24accec25c2c02a45f8d2c04dbe6b2e"],"repoTags":[],"size":"1638179"},{"id":"46fabdd7f288c91a57f5d5fe12a02a41fbe855142469fcd50cbe885229064797","repoDigests":["docker.io/library/nginx@sha256:89a1bafe028b2980994d974115ee7268ef851a6eb7c9cb9626d8035b08ba4424","docker.io/library/nginx@sha256:f547e3d0d5d02f7009737b284abc87d808e4252b42dceea361811e9fc606287f"],"repoTags":["docker.io/library/nginx:latest"],"size":"176006680"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e91
18e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/m
etrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54837949"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164b
d00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250
061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"43911e833d64d4f30460862fc0c54bb619
99d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-700813 image ls --format json --alsologtostderr:
I1101 08:50:50.217603 2344422 out.go:360] Setting OutFile to fd 1 ...
I1101 08:50:50.217917 2344422 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:50:50.217948 2344422 out.go:374] Setting ErrFile to fd 2...
I1101 08:50:50.217970 2344422 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:50:50.218269 2344422 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
I1101 08:50:50.218920 2344422 config.go:182] Loaded profile config "functional-700813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 08:50:50.219092 2344422 config.go:182] Loaded profile config "functional-700813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 08:50:50.219586 2344422 cli_runner.go:164] Run: docker container inspect functional-700813 --format={{.State.Status}}
I1101 08:50:50.237067 2344422 ssh_runner.go:195] Run: systemctl --version
I1101 08:50:50.237131 2344422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-700813
I1101 08:50:50.253450 2344422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36065 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/functional-700813/id_rsa Username:docker}
I1101 08:50:50.354174 2344422 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
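Note: the JSON form above is the easiest one to post-process, since stdout is a plain array of {id, repoDigests, repoTags, size} objects. For example, to list only tagged images with their sizes (a sketch assuming jq is installed on the host):

  out/minikube-linux-arm64 -p functional-700813 image ls --format json \
    | jq -r '.[] | select(.repoTags | length > 0) | "\(.repoTags[0]) \(.size)"'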

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-700813 image ls --format yaml --alsologtostderr:
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54837949"
- id: 46fabdd7f288c91a57f5d5fe12a02a41fbe855142469fcd50cbe885229064797
repoDigests:
- docker.io/library/nginx@sha256:89a1bafe028b2980994d974115ee7268ef851a6eb7c9cb9626d8035b08ba4424
- docker.io/library/nginx@sha256:f547e3d0d5d02f7009737b284abc87d808e4252b42dceea361811e9fc606287f
repoTags:
- docker.io/library/nginx:latest
size: "176006680"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-700813 image ls --format yaml --alsologtostderr:
I1101 08:50:45.638152 2344026 out.go:360] Setting OutFile to fd 1 ...
I1101 08:50:45.638364 2344026 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:50:45.638390 2344026 out.go:374] Setting ErrFile to fd 2...
I1101 08:50:45.638408 2344026 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:50:45.638694 2344026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
I1101 08:50:45.639373 2344026 config.go:182] Loaded profile config "functional-700813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 08:50:45.639543 2344026 config.go:182] Loaded profile config "functional-700813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 08:50:45.640086 2344026 cli_runner.go:164] Run: docker container inspect functional-700813 --format={{.State.Status}}
I1101 08:50:45.656425 2344026 ssh_runner.go:195] Run: systemctl --version
I1101 08:50:45.656491 2344026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-700813
I1101 08:50:45.673275 2344026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36065 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/functional-700813/id_rsa Username:docker}
I1101 08:50:45.778415 2344026 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-700813 ssh pgrep buildkitd: exit status 1 (293.752151ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 image build -t localhost/my-image:functional-700813 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-700813 image build -t localhost/my-image:functional-700813 testdata/build --alsologtostderr: (3.800090222s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-700813 image build -t localhost/my-image:functional-700813 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> ec6b569199a
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-700813
--> bb4993659e4
Successfully tagged localhost/my-image:functional-700813
bb4993659e4994a75b4642167f465fc8e57c066799cd3f32072f42808566819d
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-700813 image build -t localhost/my-image:functional-700813 testdata/build --alsologtostderr:
I1101 08:50:46.166473 2344125 out.go:360] Setting OutFile to fd 1 ...
I1101 08:50:46.167998 2344125 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:50:46.168016 2344125 out.go:374] Setting ErrFile to fd 2...
I1101 08:50:46.168022 2344125 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:50:46.168316 2344125 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
I1101 08:50:46.168982 2344125 config.go:182] Loaded profile config "functional-700813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 08:50:46.169654 2344125 config.go:182] Loaded profile config "functional-700813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 08:50:46.170164 2344125 cli_runner.go:164] Run: docker container inspect functional-700813 --format={{.State.Status}}
I1101 08:50:46.187519 2344125 ssh_runner.go:195] Run: systemctl --version
I1101 08:50:46.187592 2344125 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-700813
I1101 08:50:46.206725 2344125 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36065 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/functional-700813/id_rsa Username:docker}
I1101 08:50:46.310057 2344125 build_images.go:162] Building image from path: /tmp/build.1717043439.tar
I1101 08:50:46.310161 2344125 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1101 08:50:46.317923 2344125 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1717043439.tar
I1101 08:50:46.321146 2344125 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1717043439.tar: stat -c "%s %y" /var/lib/minikube/build/build.1717043439.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1717043439.tar': No such file or directory
I1101 08:50:46.321174 2344125 ssh_runner.go:362] scp /tmp/build.1717043439.tar --> /var/lib/minikube/build/build.1717043439.tar (3072 bytes)
I1101 08:50:46.338519 2344125 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1717043439
I1101 08:50:46.346054 2344125 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1717043439 -xf /var/lib/minikube/build/build.1717043439.tar
I1101 08:50:46.353819 2344125 crio.go:315] Building image: /var/lib/minikube/build/build.1717043439
I1101 08:50:46.353892 2344125 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-700813 /var/lib/minikube/build/build.1717043439 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1101 08:50:49.889685 2344125 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-700813 /var/lib/minikube/build/build.1717043439 --cgroup-manager=cgroupfs: (3.535768481s)
I1101 08:50:49.889751 2344125 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1717043439
I1101 08:50:49.898049 2344125 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1717043439.tar
I1101 08:50:49.906080 2344125 build_images.go:218] Built localhost/my-image:functional-700813 from /tmp/build.1717043439.tar
I1101 08:50:49.906112 2344125 build_images.go:134] succeeded building to: functional-700813
I1101 08:50:49.906118 2344125 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.35s)
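The build flow above (tar the build context, copy it to the node, run podman build under crio) can be exercised directly. A minimal sketch, assuming a running crio profile and the testdata/build context from this repository; the profile name is a placeholder from this run:

	# Build an image inside the node's runtime; on crio this delegates to podman
	minikube -p functional-700813 image build -t localhost/my-image:functional-700813 testdata/build --alsologtostderr
	# Confirm the new tag is visible to the runtime afterwards
	minikube -p functional-700813 image ls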

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-700813
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.64s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 image rm kicbase/echo-server:functional-700813 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (6.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-700813 /tmp/TestFunctionalparallelMountCmdany-port3808446389/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761986441926623994" to /tmp/TestFunctionalparallelMountCmdany-port3808446389/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761986441926623994" to /tmp/TestFunctionalparallelMountCmdany-port3808446389/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761986441926623994" to /tmp/TestFunctionalparallelMountCmdany-port3808446389/001/test-1761986441926623994
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-700813 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (427.014167ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1101 08:40:42.353900 2315982 retry.go:31] will retry after 376.652206ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  1 08:40 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  1 08:40 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  1 08:40 test-1761986441926623994
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 ssh cat /mount-9p/test-1761986441926623994
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-700813 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [c41a5bb8-5e1c-4beb-87d6-ad5c59f8609f] Pending
helpers_test.go:352: "busybox-mount" [c41a5bb8-5e1c-4beb-87d6-ad5c59f8609f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [c41a5bb8-5e1c-4beb-87d6-ad5c59f8609f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [c41a5bb8-5e1c-4beb-87d6-ad5c59f8609f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.002979389s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-700813 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-700813 /tmp/TestFunctionalparallelMountCmdany-port3808446389/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.85s)
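The 9p mount exercised here can be reproduced by hand. A minimal sketch, assuming a local directory /tmp/mnt exists; the path and profile name are placeholders:

	# Expose a host directory inside the node over 9p (the mount command stays in the foreground)
	minikube mount -p functional-700813 /tmp/mnt:/mount-9p --alsologtostderr -v=1 &
	# Verify the mount from the guest, as the test does
	minikube -p functional-700813 ssh "findmnt -T /mount-9p | grep 9p"
	minikube -p functional-700813 ssh -- ls -la /mount-9p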

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-700813 /tmp/TestFunctionalparallelMountCmdspecific-port1742114820/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-700813 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (369.800632ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1101 08:40:49.144021 2315982 retry.go:31] will retry after 342.21754ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-700813 /tmp/TestFunctionalparallelMountCmdspecific-port1742114820/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-700813 ssh "sudo umount -f /mount-9p": exit status 1 (275.333757ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-700813 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-700813 /tmp/TestFunctionalparallelMountCmdspecific-port1742114820/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.75s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-700813 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3283141996/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-700813 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3283141996/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-700813 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3283141996/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-700813 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-700813 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3283141996/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-700813 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3283141996/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-700813 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3283141996/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.28s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-700813 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-700813 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-700813 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 2341395: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-700813 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.49s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-700813 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-700813 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [11ac18ce-33a9-4633-a089-6457d153b246] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [11ac18ce-33a9-4633-a089-6457d153b246] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.006293876s
I1101 08:41:02.881261 2315982 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.34s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-700813 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.107.91.193 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
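The tunnel flow (start the tunnel, create a LoadBalancer service, read back its ingress IP) can be replayed manually. A minimal sketch, assuming testdata/testsvc.yaml from this repository; the profile name is a placeholder from this run:

	# Route LoadBalancer traffic into the cluster (keeps running until interrupted)
	minikube -p functional-700813 tunnel --alsologtostderr &
	# Create the nginx-svc LoadBalancer service used by the test
	kubectl --context functional-700813 apply -f testdata/testsvc.yaml
	# Once the tunnel assigns an ingress IP, read it back
	kubectl --context functional-700813 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'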

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-700813 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-arm64 -p functional-700813 service list: (1.321288889s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.32s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (1.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-arm64 -p functional-700813 service list -o json: (1.291994837s)
functional_test.go:1504: Took "1.292108918s" to run "out/minikube-linux-arm64 -p functional-700813 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.29s)
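Both listing variants above are plain CLI calls; a minimal sketch (profile name is a placeholder from this run):

	# Human-readable service table
	minikube -p functional-700813 service list
	# Same data as JSON, suitable for scripting
	minikube -p functional-700813 service list -o json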

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-700813 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)
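update-context refreshes the profile's kubeconfig entry (useful when the API server address or port changes between restarts). A minimal sketch of the call the three variants above make:

	# Refresh the kubeconfig entry for this profile
	minikube -p functional-700813 update-context --alsologtostderr -v=2
	# Confirm kubectl still resolves the context afterwards
	kubectl --context functional-700813 get nodes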

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-700813
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-700813
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-700813
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (207.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1101 08:52:35.748324 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:53:58.815613 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-689934 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m27.051199098s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (207.92s)
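The HA cluster above is started with a single command. A minimal sketch, assuming the docker driver and crio runtime used throughout this run; the profile name ha-689934 is a placeholder:

	# Start a multi-control-plane (HA) cluster and wait for all components
	minikube -p ha-689934 start --ha --memory 3072 --wait true --driver=docker --container-runtime=crio
	# Check every node's host/kubelet/apiserver state
	minikube -p ha-689934 status --alsologtostderr -v 5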

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-689934 kubectl -- rollout status deployment/busybox: (3.799692473s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 kubectl -- exec busybox-7b57f96db7-8qhrm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 kubectl -- exec busybox-7b57f96db7-jrckf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 kubectl -- exec busybox-7b57f96db7-rt55c -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 kubectl -- exec busybox-7b57f96db7-8qhrm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 kubectl -- exec busybox-7b57f96db7-jrckf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 kubectl -- exec busybox-7b57f96db7-rt55c -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 kubectl -- exec busybox-7b57f96db7-8qhrm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 kubectl -- exec busybox-7b57f96db7-jrckf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 kubectl -- exec busybox-7b57f96db7-rt55c -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.83s)
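The DNS checks above boil down to exec-ing nslookup in each busybox replica. A minimal sketch; pod names are generated per run, so the one below is a placeholder taken from this log:

	# Deploy the test workload and wait for the rollout
	minikube -p ha-689934 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
	minikube -p ha-689934 kubectl -- rollout status deployment/busybox
	# Resolve an external and an in-cluster name from one of the pods
	minikube -p ha-689934 kubectl -- exec busybox-7b57f96db7-8qhrm -- nslookup kubernetes.io
	minikube -p ha-689934 kubectl -- exec busybox-7b57f96db7-8qhrm -- nslookup kubernetes.default.svc.cluster.local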

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 kubectl -- exec busybox-7b57f96db7-8qhrm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 kubectl -- exec busybox-7b57f96db7-8qhrm -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 kubectl -- exec busybox-7b57f96db7-jrckf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 kubectl -- exec busybox-7b57f96db7-jrckf -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 kubectl -- exec busybox-7b57f96db7-rt55c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 kubectl -- exec busybox-7b57f96db7-rt55c -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.48s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (58.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 node add --alsologtostderr -v 5
E1101 08:55:35.275958 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/functional-700813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:55:35.282327 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/functional-700813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:55:35.293690 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/functional-700813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:55:35.315305 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/functional-700813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:55:35.356660 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/functional-700813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:55:35.438141 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/functional-700813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:55:35.599613 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/functional-700813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:55:35.921629 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/functional-700813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:55:36.563805 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/functional-700813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:55:37.845210 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/functional-700813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:55:40.408021 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/functional-700813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:55:45.529877 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/functional-700813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:55:55.771992 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/functional-700813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:56:16.253305 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/functional-700813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-689934 node add --alsologtostderr -v 5: (57.037286317s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-689934 status --alsologtostderr -v 5: (1.04607814s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (58.08s)
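Adding a worker is the same node add call shown above; a minimal sketch against the placeholder profile:

	# Add a worker node to the existing HA profile
	minikube -p ha-689934 node add --alsologtostderr -v 5
	# The new node should appear in the status output
	minikube -p ha-689934 status --alsologtostderr -v 5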

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-689934 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.109512906s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.11s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (19.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-689934 status --output json --alsologtostderr -v 5: (1.015038096s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 cp testdata/cp-test.txt ha-689934:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 ssh -n ha-689934 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 cp ha-689934:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1753862882/001/cp-test_ha-689934.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 ssh -n ha-689934 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 cp ha-689934:/home/docker/cp-test.txt ha-689934-m02:/home/docker/cp-test_ha-689934_ha-689934-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 ssh -n ha-689934 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 ssh -n ha-689934-m02 "sudo cat /home/docker/cp-test_ha-689934_ha-689934-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 cp ha-689934:/home/docker/cp-test.txt ha-689934-m03:/home/docker/cp-test_ha-689934_ha-689934-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 ssh -n ha-689934 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 ssh -n ha-689934-m03 "sudo cat /home/docker/cp-test_ha-689934_ha-689934-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 cp ha-689934:/home/docker/cp-test.txt ha-689934-m04:/home/docker/cp-test_ha-689934_ha-689934-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 ssh -n ha-689934 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 ssh -n ha-689934-m04 "sudo cat /home/docker/cp-test_ha-689934_ha-689934-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 cp testdata/cp-test.txt ha-689934-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 ssh -n ha-689934-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 cp ha-689934-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1753862882/001/cp-test_ha-689934-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 ssh -n ha-689934-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 cp ha-689934-m02:/home/docker/cp-test.txt ha-689934:/home/docker/cp-test_ha-689934-m02_ha-689934.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 ssh -n ha-689934-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 ssh -n ha-689934 "sudo cat /home/docker/cp-test_ha-689934-m02_ha-689934.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 cp ha-689934-m02:/home/docker/cp-test.txt ha-689934-m03:/home/docker/cp-test_ha-689934-m02_ha-689934-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 ssh -n ha-689934-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 ssh -n ha-689934-m03 "sudo cat /home/docker/cp-test_ha-689934-m02_ha-689934-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 cp ha-689934-m02:/home/docker/cp-test.txt ha-689934-m04:/home/docker/cp-test_ha-689934-m02_ha-689934-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 ssh -n ha-689934-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 ssh -n ha-689934-m04 "sudo cat /home/docker/cp-test_ha-689934-m02_ha-689934-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 cp testdata/cp-test.txt ha-689934-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 ssh -n ha-689934-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 cp ha-689934-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1753862882/001/cp-test_ha-689934-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 ssh -n ha-689934-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 cp ha-689934-m03:/home/docker/cp-test.txt ha-689934:/home/docker/cp-test_ha-689934-m03_ha-689934.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 ssh -n ha-689934-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 ssh -n ha-689934 "sudo cat /home/docker/cp-test_ha-689934-m03_ha-689934.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 cp ha-689934-m03:/home/docker/cp-test.txt ha-689934-m02:/home/docker/cp-test_ha-689934-m03_ha-689934-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 ssh -n ha-689934-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 ssh -n ha-689934-m02 "sudo cat /home/docker/cp-test_ha-689934-m03_ha-689934-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 cp ha-689934-m03:/home/docker/cp-test.txt ha-689934-m04:/home/docker/cp-test_ha-689934-m03_ha-689934-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 ssh -n ha-689934-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 ssh -n ha-689934-m04 "sudo cat /home/docker/cp-test_ha-689934-m03_ha-689934-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 cp testdata/cp-test.txt ha-689934-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 ssh -n ha-689934-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 cp ha-689934-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1753862882/001/cp-test_ha-689934-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 ssh -n ha-689934-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 cp ha-689934-m04:/home/docker/cp-test.txt ha-689934:/home/docker/cp-test_ha-689934-m04_ha-689934.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 ssh -n ha-689934-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 ssh -n ha-689934 "sudo cat /home/docker/cp-test_ha-689934-m04_ha-689934.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 cp ha-689934-m04:/home/docker/cp-test.txt ha-689934-m02:/home/docker/cp-test_ha-689934-m04_ha-689934-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 ssh -n ha-689934-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 ssh -n ha-689934-m02 "sudo cat /home/docker/cp-test_ha-689934-m04_ha-689934-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 cp ha-689934-m04:/home/docker/cp-test.txt ha-689934-m03:/home/docker/cp-test_ha-689934-m04_ha-689934-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 ssh -n ha-689934-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 ssh -n ha-689934-m03 "sudo cat /home/docker/cp-test_ha-689934-m04_ha-689934-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.48s)
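Each copy above pairs a minikube cp with an ssh cat to verify the file landed. A minimal sketch for a single node pair; node names follow the <profile>-mNN convention from this run:

	# Copy a local file onto the primary node, then from the primary onto a secondary
	minikube -p ha-689934 cp testdata/cp-test.txt ha-689934:/home/docker/cp-test.txt
	minikube -p ha-689934 cp ha-689934:/home/docker/cp-test.txt ha-689934-m02:/home/docker/cp-test_ha-689934_ha-689934-m02.txt
	# Verify the contents on the target node
	minikube -p ha-689934 ssh -n ha-689934-m02 "sudo cat /home/docker/cp-test_ha-689934_ha-689934-m02.txt"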

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-689934 node stop m02 --alsologtostderr -v 5: (12.113603318s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-689934 status --alsologtostderr -v 5: exit status 7 (767.663901ms)

                                                
                                                
-- stdout --
	ha-689934
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-689934-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-689934-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-689934-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 08:56:51.955121 2359673 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:56:51.955239 2359673 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:56:51.955249 2359673 out.go:374] Setting ErrFile to fd 2...
	I1101 08:56:51.955254 2359673 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:56:51.955504 2359673 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 08:56:51.955788 2359673 out.go:368] Setting JSON to false
	I1101 08:56:51.955828 2359673 mustload.go:66] Loading cluster: ha-689934
	I1101 08:56:51.955907 2359673 notify.go:221] Checking for updates...
	I1101 08:56:51.956879 2359673 config.go:182] Loaded profile config "ha-689934": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:56:51.956900 2359673 status.go:174] checking status of ha-689934 ...
	I1101 08:56:51.957411 2359673 cli_runner.go:164] Run: docker container inspect ha-689934 --format={{.State.Status}}
	I1101 08:56:51.980767 2359673 status.go:371] ha-689934 host status = "Running" (err=<nil>)
	I1101 08:56:51.980794 2359673 host.go:66] Checking if "ha-689934" exists ...
	I1101 08:56:51.981086 2359673 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-689934
	I1101 08:56:52.000334 2359673 host.go:66] Checking if "ha-689934" exists ...
	I1101 08:56:52.000808 2359673 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 08:56:52.000928 2359673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-689934
	I1101 08:56:52.022623 2359673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36070 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/ha-689934/id_rsa Username:docker}
	I1101 08:56:52.125494 2359673 ssh_runner.go:195] Run: systemctl --version
	I1101 08:56:52.132095 2359673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 08:56:52.148526 2359673 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:56:52.213378 2359673 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:72 SystemTime:2025-11-01 08:56:52.204220123 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 08:56:52.214067 2359673 kubeconfig.go:125] found "ha-689934" server: "https://192.168.49.254:8443"
	I1101 08:56:52.214101 2359673 api_server.go:166] Checking apiserver status ...
	I1101 08:56:52.214151 2359673 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 08:56:52.230251 2359673 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1279/cgroup
	I1101 08:56:52.239250 2359673 api_server.go:182] apiserver freezer: "9:freezer:/docker/c85faf3971689b33e326faccdc9d5af00185eb3a5ef88a1526053a9d370437a3/crio/crio-f54c77b0f09bc5c1f38d52cbe0a1c1118a1f3b8e444ff13eef1d736a9cb78e14"
	I1101 08:56:52.239310 2359673 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c85faf3971689b33e326faccdc9d5af00185eb3a5ef88a1526053a9d370437a3/crio/crio-f54c77b0f09bc5c1f38d52cbe0a1c1118a1f3b8e444ff13eef1d736a9cb78e14/freezer.state
	I1101 08:56:52.247389 2359673 api_server.go:204] freezer state: "THAWED"
	I1101 08:56:52.247414 2359673 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1101 08:56:52.255885 2359673 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1101 08:56:52.255916 2359673 status.go:463] ha-689934 apiserver status = Running (err=<nil>)
	I1101 08:56:52.255928 2359673 status.go:176] ha-689934 status: &{Name:ha-689934 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 08:56:52.255944 2359673 status.go:174] checking status of ha-689934-m02 ...
	I1101 08:56:52.256278 2359673 cli_runner.go:164] Run: docker container inspect ha-689934-m02 --format={{.State.Status}}
	I1101 08:56:52.276477 2359673 status.go:371] ha-689934-m02 host status = "Stopped" (err=<nil>)
	I1101 08:56:52.276500 2359673 status.go:384] host is not running, skipping remaining checks
	I1101 08:56:52.276507 2359673 status.go:176] ha-689934-m02 status: &{Name:ha-689934-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 08:56:52.276528 2359673 status.go:174] checking status of ha-689934-m03 ...
	I1101 08:56:52.276915 2359673 cli_runner.go:164] Run: docker container inspect ha-689934-m03 --format={{.State.Status}}
	I1101 08:56:52.292985 2359673 status.go:371] ha-689934-m03 host status = "Running" (err=<nil>)
	I1101 08:56:52.293011 2359673 host.go:66] Checking if "ha-689934-m03" exists ...
	I1101 08:56:52.293305 2359673 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-689934-m03
	I1101 08:56:52.309644 2359673 host.go:66] Checking if "ha-689934-m03" exists ...
	I1101 08:56:52.309939 2359673 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 08:56:52.309983 2359673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-689934-m03
	I1101 08:56:52.326674 2359673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36080 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/ha-689934-m03/id_rsa Username:docker}
	I1101 08:56:52.429338 2359673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 08:56:52.443746 2359673 kubeconfig.go:125] found "ha-689934" server: "https://192.168.49.254:8443"
	I1101 08:56:52.443773 2359673 api_server.go:166] Checking apiserver status ...
	I1101 08:56:52.443813 2359673 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 08:56:52.454896 2359673 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1195/cgroup
	I1101 08:56:52.463014 2359673 api_server.go:182] apiserver freezer: "9:freezer:/docker/5df2915268cb0682d9b3b6202e9740b452ce09e1a77d902a2d890daec63fb1e0/crio/crio-48cd759bc96727c025c38fee1c33117e245bf8a7e9f7986f830681abc48ddd86"
	I1101 08:56:52.463087 2359673 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5df2915268cb0682d9b3b6202e9740b452ce09e1a77d902a2d890daec63fb1e0/crio/crio-48cd759bc96727c025c38fee1c33117e245bf8a7e9f7986f830681abc48ddd86/freezer.state
	I1101 08:56:52.484492 2359673 api_server.go:204] freezer state: "THAWED"
	I1101 08:56:52.484517 2359673 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1101 08:56:52.492532 2359673 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1101 08:56:52.492557 2359673 status.go:463] ha-689934-m03 apiserver status = Running (err=<nil>)
	I1101 08:56:52.492566 2359673 status.go:176] ha-689934-m03 status: &{Name:ha-689934-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 08:56:52.492583 2359673 status.go:174] checking status of ha-689934-m04 ...
	I1101 08:56:52.492897 2359673 cli_runner.go:164] Run: docker container inspect ha-689934-m04 --format={{.State.Status}}
	I1101 08:56:52.509603 2359673 status.go:371] ha-689934-m04 host status = "Running" (err=<nil>)
	I1101 08:56:52.509628 2359673 host.go:66] Checking if "ha-689934-m04" exists ...
	I1101 08:56:52.509948 2359673 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-689934-m04
	I1101 08:56:52.527193 2359673 host.go:66] Checking if "ha-689934-m04" exists ...
	I1101 08:56:52.527504 2359673 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 08:56:52.527550 2359673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-689934-m04
	I1101 08:56:52.544567 2359673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36085 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/ha-689934-m04/id_rsa Username:docker}
	I1101 08:56:52.653048 2359673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 08:56:52.666082 2359673 status.go:176] ha-689934-m04 status: &{Name:ha-689934-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.88s)
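Stopping one control-plane node and reading back the degraded state is a two-command sequence. A minimal sketch; note that status exits non-zero (status 7 in this run) once any node is stopped:

	# Stop the second control-plane node
	minikube -p ha-689934 node stop m02 --alsologtostderr -v 5
	# Status now reports m02 as Stopped and the command exits with a non-zero code
	minikube -p ha-689934 status --alsologtostderr -v 5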

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (21.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 node start m02 --alsologtostderr -v 5
E1101 08:56:57.215660 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/functional-700813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-689934 node start m02 --alsologtostderr -v 5: (20.641508986s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-689934 status --alsologtostderr -v 5: (1.118524929s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (21.87s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.06s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.058203942s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.06s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (135.88s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 stop --alsologtostderr -v 5
E1101 08:57:35.748574 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-689934 stop --alsologtostderr -v 5: (37.163816767s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 start --wait true --alsologtostderr -v 5
E1101 08:58:19.137621 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/functional-700813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-689934 start --wait true --alsologtostderr -v 5: (1m38.543695338s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (135.88s)

TestMultiControlPlane/serial/DeleteSecondaryNode (9.86s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-689934 node delete m03 --alsologtostderr -v 5: (8.853547412s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.86s)
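The go-template query above reduces `kubectl get nodes` to one Ready-condition status per node. A short Go sketch of the same readiness check, assuming kubectl is on PATH and already pointed at the cluster under test:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template the test passes to kubectl: print the Ready condition of
	// every node, one per line.
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	for _, status := range strings.Fields(string(out)) {
		if status != "True" {
			fmt.Println("found a node that is not Ready:", status)
			return
		}
	}
	fmt.Println("all nodes report Ready=True")
}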

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.83s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.83s)

TestMultiControlPlane/serial/StopCluster (36.07s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-689934 stop --alsologtostderr -v 5: (35.948217701s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-689934 status --alsologtostderr -v 5: exit status 7 (122.728418ms)

-- stdout --
	ha-689934
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-689934-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-689934-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1101 09:00:18.981272 2371739 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:00:18.981470 2371739 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:00:18.981498 2371739 out.go:374] Setting ErrFile to fd 2...
	I1101 09:00:18.981516 2371739 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:00:18.982409 2371739 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 09:00:18.982683 2371739 out.go:368] Setting JSON to false
	I1101 09:00:18.982763 2371739 mustload.go:66] Loading cluster: ha-689934
	I1101 09:00:18.982860 2371739 notify.go:221] Checking for updates...
	I1101 09:00:18.983230 2371739 config.go:182] Loaded profile config "ha-689934": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:00:18.983274 2371739 status.go:174] checking status of ha-689934 ...
	I1101 09:00:18.983873 2371739 cli_runner.go:164] Run: docker container inspect ha-689934 --format={{.State.Status}}
	I1101 09:00:19.004761 2371739 status.go:371] ha-689934 host status = "Stopped" (err=<nil>)
	I1101 09:00:19.004790 2371739 status.go:384] host is not running, skipping remaining checks
	I1101 09:00:19.004799 2371739 status.go:176] ha-689934 status: &{Name:ha-689934 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:00:19.004844 2371739 status.go:174] checking status of ha-689934-m02 ...
	I1101 09:00:19.005184 2371739 cli_runner.go:164] Run: docker container inspect ha-689934-m02 --format={{.State.Status}}
	I1101 09:00:19.033108 2371739 status.go:371] ha-689934-m02 host status = "Stopped" (err=<nil>)
	I1101 09:00:19.033129 2371739 status.go:384] host is not running, skipping remaining checks
	I1101 09:00:19.033137 2371739 status.go:176] ha-689934-m02 status: &{Name:ha-689934-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:00:19.033157 2371739 status.go:174] checking status of ha-689934-m04 ...
	I1101 09:00:19.033464 2371739 cli_runner.go:164] Run: docker container inspect ha-689934-m04 --format={{.State.Status}}
	I1101 09:00:19.051682 2371739 status.go:371] ha-689934-m04 host status = "Stopped" (err=<nil>)
	I1101 09:00:19.051701 2371739 status.go:384] host is not running, skipping remaining checks
	I1101 09:00:19.051707 2371739 status.go:176] ha-689934-m04 status: &{Name:ha-689934-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.07s)
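In the run above, `minikube status` exits with code 7 once every node in the profile is stopped, so a caller can branch on the exit code instead of parsing the table. A sketch of reading that exit code from Go, assuming a minikube binary named `minikube` on PATH (the profile name is copied from the log; no meaning beyond zero/non-zero is assumed here):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "ha-689934", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("exit code 0: cluster components are running")
	case errors.As(err, &exitErr):
		// In the log above a fully stopped cluster produced exit status 7.
		fmt.Println("non-zero exit code:", exitErr.ExitCode())
	default:
		fmt.Println("could not run minikube:", err)
	}
}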

TestMultiControlPlane/serial/RestartCluster (82.11s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1101 09:00:35.275077 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/functional-700813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:01:02.979679 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/functional-700813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-689934 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m21.070195363s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (82.11s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.77s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.77s)

TestMultiControlPlane/serial/AddSecondaryNode (75.31s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 node add --control-plane --alsologtostderr -v 5
E1101 09:02:35.747976 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-689934 node add --control-plane --alsologtostderr -v 5: (1m14.280119687s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-689934 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-689934 status --alsologtostderr -v 5: (1.030326873s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.31s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.06s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.056894431s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.06s)

TestJSONOutput/start/Command (78.36s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-623514 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-623514 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m18.355094176s)
--- PASS: TestJSONOutput/start/Command (78.36s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.84s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-623514 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-623514 --output=json --user=testUser: (5.834991928s)
--- PASS: TestJSONOutput/stop/Command (5.84s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-803057 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-803057 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (92.08455ms)

-- stdout --
	{"specversion":"1.0","id":"0b7e25a3-032a-46dd-83c6-600bf92a6f53","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-803057] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"70a60a84-ac19-456f-b4aa-09dfd41cf09f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21835"}}
	{"specversion":"1.0","id":"a95b6548-44a6-46d7-9a35-981a83b79537","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a07d0f0e-1839-4bdd-9b65-0b21a9ce7b02","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21835-2314135/kubeconfig"}}
	{"specversion":"1.0","id":"514a6803-fe6c-4d73-86c0-721e78f0cf57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2314135/.minikube"}}
	{"specversion":"1.0","id":"89c02fed-5f39-4fb3-bc77-5099362c2c9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"07a7bc45-1618-46da-9369-5881e076002a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"50f7a8de-d395-4ccd-8a8d-d170ae6fd1d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-803057" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-803057
--- PASS: TestErrorJSONOutput (0.24s)
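Every stdout line in the run above is a single CloudEvents-style JSON object, which is what makes --output=json scriptable. A sketch that decodes the error event from the captured output, using a struct shaped after the fields visible in the log (this is not an official minikube type):

package main

import (
	"encoding/json"
	"fmt"
)

// minikubeEvent mirrors only the fields visible in the log output above.
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Copied from the captured stdout above.
	line := `{"specversion":"1.0","id":"50f7a8de-d395-4ccd-8a8d-d170ae6fd1d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("%s: %s (exit code %s)\n", ev.Type, ev.Data["message"], ev.Data["exitcode"])
}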

TestKicCustomNetwork/create_custom_network (43.85s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-630026 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-630026 --network=: (41.685300003s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-630026" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-630026
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-630026: (2.143945485s)
--- PASS: TestKicCustomNetwork/create_custom_network (43.85s)

TestKicCustomNetwork/use_default_bridge_network (39.25s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-571977 --network=bridge
E1101 09:05:35.280288 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/functional-700813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-571977 --network=bridge: (37.187215774s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-571977" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-571977
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-571977: (2.037265793s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (39.25s)

TestKicExistingNetwork (34.69s)
=== RUN   TestKicExistingNetwork
I1101 09:06:03.579329 2315982 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1101 09:06:03.594148 2315982 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1101 09:06:03.595174 2315982 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1101 09:06:03.595220 2315982 cli_runner.go:164] Run: docker network inspect existing-network
W1101 09:06:03.611035 2315982 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1101 09:06:03.611066 2315982 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1101 09:06:03.611083 2315982 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1101 09:06:03.611188 2315982 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1101 09:06:03.632733 2315982 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2d14cb2bf967 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:44:96:dd:d5:f7} reservation:<nil>}
I1101 09:06:03.633094 2315982 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a49120}
I1101 09:06:03.633120 2315982 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1101 09:06:03.633175 2315982 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1101 09:06:03.691335 2315982 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-367491 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-367491 --network=existing-network: (32.479462165s)
helpers_test.go:175: Cleaning up "existing-network-367491" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-367491
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-367491: (2.062167883s)
I1101 09:06:38.250244 2315982 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (34.69s)
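The interesting part of the log above is the network bootstrap: minikube skips 192.168.49.0/24 because it is already taken and creates the `existing-network` bridge on the next free /24. A Go sketch of just the creation step, with the flags copied from the cli_runner line above (the subnet is hard-coded here rather than probed):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Flags mirror the "docker network create" invocation in the log; the
	// network name and subnet are taken from that run.
	args := []string{
		"network", "create", "--driver=bridge",
		"--subnet=192.168.58.0/24", "--gateway=192.168.58.1",
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network",
	}
	out, err := exec.Command("docker", args...).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("docker network create failed:", err)
	}
}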

TestKicCustomSubnet (38.42s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-690358 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-690358 --subnet=192.168.60.0/24: (36.182145707s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-690358 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-690358" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-690358
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-690358: (2.209449499s)
--- PASS: TestKicCustomSubnet (38.42s)

TestKicStaticIP (35.92s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-374329 --static-ip=192.168.200.200
E1101 09:07:35.748146 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-374329 --static-ip=192.168.200.200: (33.506549501s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-374329 ip
helpers_test.go:175: Cleaning up "static-ip-374329" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-374329
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-374329: (2.24755733s)
--- PASS: TestKicStaticIP (35.92s)

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (73.18s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-668298 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-668298 --driver=docker  --container-runtime=crio: (30.388518851s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-670727 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-670727 --driver=docker  --container-runtime=crio: (36.975210511s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-668298
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-670727
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-670727" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-670727
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-670727: (2.13796228s)
helpers_test.go:175: Cleaning up "first-668298" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-668298
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-668298: (2.018851786s)
--- PASS: TestMinikubeProfile (73.18s)

TestMountStart/serial/StartWithMountFirst (8.12s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-732697 --memory=3072 --mount-string /tmp/TestMountStartserial108169575/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-732697 --memory=3072 --mount-string /tmp/TestMountStartserial108169575/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.121186645s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.12s)

TestMountStart/serial/VerifyMountFirst (0.29s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-732697 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)
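The verification step above is a single `ssh -- ls /minikube-host` against the directory passed via --mount-string. A sketch of a slightly stronger check under the same assumptions (profile name and guest path from the log; the host directory is a placeholder): write a file on the host side, then list the mount from inside the node.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// Placeholder for the host side of --mount-string host-dir:/minikube-host.
	hostDir := "/tmp/TestMountStartserial108169575/001"

	if err := os.MkdirAll(hostDir, 0o755); err != nil {
		fmt.Println("mkdir failed:", err)
		return
	}
	if err := os.WriteFile(filepath.Join(hostDir, "probe.txt"), []byte("hello\n"), 0o644); err != nil {
		fmt.Println("write failed:", err)
		return
	}

	// List the guest side of the mount; probe.txt should show up if the mount
	// is live.
	out, err := exec.Command("minikube", "-p", "mount-start-1-732697", "ssh", "--", "ls", "/minikube-host").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("ssh ls failed:", err)
	}
}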

TestMountStart/serial/StartWithMountSecond (9.78s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-734683 --memory=3072 --mount-string /tmp/TestMountStartserial108169575/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-734683 --memory=3072 --mount-string /tmp/TestMountStartserial108169575/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.775626904s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.78s)

TestMountStart/serial/VerifyMountSecond (0.26s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-734683 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.71s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-732697 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-732697 --alsologtostderr -v=5: (1.708095839s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-734683 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.29s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-734683
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-734683: (1.292589329s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (8.75s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-734683
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-734683: (7.744941111s)
--- PASS: TestMountStart/serial/RestartStopped (8.75s)

TestMountStart/serial/VerifyMountPostStop (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-734683 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (138.27s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-697878 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1101 09:10:35.276077 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/functional-700813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:10:38.817316 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-697878 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m17.730588575s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (138.27s)

TestMultiNode/serial/DeployApp2Nodes (5.3s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-697878 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-697878 -- rollout status deployment/busybox
E1101 09:11:58.341771 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/functional-700813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-697878 -- rollout status deployment/busybox: (3.265421081s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-697878 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-697878 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-697878 -- exec busybox-7b57f96db7-65s9g -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-697878 -- exec busybox-7b57f96db7-cgt2m -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-697878 -- exec busybox-7b57f96db7-65s9g -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-697878 -- exec busybox-7b57f96db7-cgt2m -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-697878 -- exec busybox-7b57f96db7-65s9g -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-697878 -- exec busybox-7b57f96db7-cgt2m -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.30s)

TestMultiNode/serial/PingHostFrom2Pods (0.95s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-697878 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-697878 -- exec busybox-7b57f96db7-65s9g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-697878 -- exec busybox-7b57f96db7-65s9g -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-697878 -- exec busybox-7b57f96db7-cgt2m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-697878 -- exec busybox-7b57f96db7-cgt2m -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.95s)
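The two exec calls above resolve host.minikube.internal from inside a pod (line 5 of the nslookup output carries the address) and then ping that address. A sketch of the same round trip driven with plain kubectl from Go (the pod name is copied from the log; substitute your own):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pod := "busybox-7b57f96db7-65s9g" // taken from the log above

	// Same pipeline the test uses: line 5 of the nslookup output carries the
	// resolved address.
	resolve := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
	out, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", resolve).Output()
	if err != nil {
		fmt.Println("resolve failed:", err)
		return
	}
	hostIP := strings.TrimSpace(string(out))
	fmt.Println("host.minikube.internal resolves to", hostIP)

	if err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", "ping -c 1 "+hostIP).Run(); err != nil {
		fmt.Println("ping failed:", err)
		return
	}
	fmt.Println("host reachable from pod")
}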

TestMultiNode/serial/AddNode (58.12s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-697878 -v=5 --alsologtostderr
E1101 09:12:35.748021 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-697878 -v=5 --alsologtostderr: (57.390409137s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.12s)

TestMultiNode/serial/MultiNodeLabels (0.1s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-697878 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)
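The label check above uses a jsonpath expression to dump every node's labels in one call. The same query from Go, assuming kubectl on PATH and the context name shown in the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// jsonpath expression copied from the test invocation above.
	out, err := exec.Command("kubectl", "--context", "multinode-697878", "get", "nodes",
		"-o", `jsonpath=[{range .items[*]}{.metadata.labels},{end}]`).Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	fmt.Println(string(out))
}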

TestMultiNode/serial/ProfileList (0.76s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.76s)

TestMultiNode/serial/CopyFile (10.29s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 cp testdata/cp-test.txt multinode-697878:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 ssh -n multinode-697878 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 cp multinode-697878:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2803548106/001/cp-test_multinode-697878.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 ssh -n multinode-697878 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 cp multinode-697878:/home/docker/cp-test.txt multinode-697878-m02:/home/docker/cp-test_multinode-697878_multinode-697878-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 ssh -n multinode-697878 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 ssh -n multinode-697878-m02 "sudo cat /home/docker/cp-test_multinode-697878_multinode-697878-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 cp multinode-697878:/home/docker/cp-test.txt multinode-697878-m03:/home/docker/cp-test_multinode-697878_multinode-697878-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 ssh -n multinode-697878 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 ssh -n multinode-697878-m03 "sudo cat /home/docker/cp-test_multinode-697878_multinode-697878-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 cp testdata/cp-test.txt multinode-697878-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 ssh -n multinode-697878-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 cp multinode-697878-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2803548106/001/cp-test_multinode-697878-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 ssh -n multinode-697878-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 cp multinode-697878-m02:/home/docker/cp-test.txt multinode-697878:/home/docker/cp-test_multinode-697878-m02_multinode-697878.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 ssh -n multinode-697878-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 ssh -n multinode-697878 "sudo cat /home/docker/cp-test_multinode-697878-m02_multinode-697878.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 cp multinode-697878-m02:/home/docker/cp-test.txt multinode-697878-m03:/home/docker/cp-test_multinode-697878-m02_multinode-697878-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 ssh -n multinode-697878-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 ssh -n multinode-697878-m03 "sudo cat /home/docker/cp-test_multinode-697878-m02_multinode-697878-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 cp testdata/cp-test.txt multinode-697878-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 ssh -n multinode-697878-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 cp multinode-697878-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2803548106/001/cp-test_multinode-697878-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 ssh -n multinode-697878-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 cp multinode-697878-m03:/home/docker/cp-test.txt multinode-697878:/home/docker/cp-test_multinode-697878-m03_multinode-697878.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 ssh -n multinode-697878-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 ssh -n multinode-697878 "sudo cat /home/docker/cp-test_multinode-697878-m03_multinode-697878.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 cp multinode-697878-m03:/home/docker/cp-test.txt multinode-697878-m02:/home/docker/cp-test_multinode-697878-m03_multinode-697878-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 ssh -n multinode-697878-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 ssh -n multinode-697878-m02 "sudo cat /home/docker/cp-test_multinode-697878-m03_multinode-697878-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.29s)
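The CopyFile sequence above repeats one pattern: `cp` a file onto a node, then `ssh -n <node> sudo cat` it back to confirm the contents. A condensed Go sketch of that round trip, assuming a `minikube` binary on PATH (profile, node, and paths are taken from the log):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const profile, node, local, remote = "multinode-697878", "multinode-697878", "testdata/cp-test.txt", "/home/docker/cp-test.txt"

	want, err := os.ReadFile(local)
	if err != nil {
		fmt.Println("read local file:", err)
		return
	}
	if err := exec.Command("minikube", "-p", profile, "cp", local, node+":"+remote).Run(); err != nil {
		fmt.Println("cp failed:", err)
		return
	}
	got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node, "sudo cat "+remote).Output()
	if err != nil {
		fmt.Println("ssh cat failed:", err)
		return
	}
	if bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		fmt.Println("copied file matches the local source")
	} else {
		fmt.Println("contents differ")
	}
}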

TestMultiNode/serial/StopNode (2.41s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-697878 node stop m03: (1.320126686s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-697878 status: exit status 7 (545.986984ms)

-- stdout --
	multinode-697878
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-697878-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-697878-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-697878 status --alsologtostderr: exit status 7 (538.939342ms)

-- stdout --
	multinode-697878
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-697878-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-697878-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1101 09:13:14.171585 2422164 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:13:14.171715 2422164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:13:14.171728 2422164 out.go:374] Setting ErrFile to fd 2...
	I1101 09:13:14.171758 2422164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:13:14.172306 2422164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 09:13:14.172539 2422164 out.go:368] Setting JSON to false
	I1101 09:13:14.172588 2422164 mustload.go:66] Loading cluster: multinode-697878
	I1101 09:13:14.172692 2422164 notify.go:221] Checking for updates...
	I1101 09:13:14.172980 2422164 config.go:182] Loaded profile config "multinode-697878": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:13:14.172998 2422164 status.go:174] checking status of multinode-697878 ...
	I1101 09:13:14.173480 2422164 cli_runner.go:164] Run: docker container inspect multinode-697878 --format={{.State.Status}}
	I1101 09:13:14.191386 2422164 status.go:371] multinode-697878 host status = "Running" (err=<nil>)
	I1101 09:13:14.191412 2422164 host.go:66] Checking if "multinode-697878" exists ...
	I1101 09:13:14.191732 2422164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-697878
	I1101 09:13:14.218256 2422164 host.go:66] Checking if "multinode-697878" exists ...
	I1101 09:13:14.218586 2422164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:13:14.218640 2422164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-697878
	I1101 09:13:14.238675 2422164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36190 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/multinode-697878/id_rsa Username:docker}
	I1101 09:13:14.341442 2422164 ssh_runner.go:195] Run: systemctl --version
	I1101 09:13:14.347911 2422164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:13:14.360931 2422164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:13:14.421808 2422164 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-01 09:13:14.404486116 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:13:14.422424 2422164 kubeconfig.go:125] found "multinode-697878" server: "https://192.168.67.2:8443"
	I1101 09:13:14.422457 2422164 api_server.go:166] Checking apiserver status ...
	I1101 09:13:14.422504 2422164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:13:14.435666 2422164 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1227/cgroup
	I1101 09:13:14.443774 2422164 api_server.go:182] apiserver freezer: "9:freezer:/docker/31eeb3aba4a85b636f884727deaaa95e8e6c3c6ff2c86a54408a8bee760c73ff/crio/crio-4356e85b437c2f70279fc4bbe46333c709d2aa51db38ca6f1078f8162b0ac936"
	I1101 09:13:14.443836 2422164 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/31eeb3aba4a85b636f884727deaaa95e8e6c3c6ff2c86a54408a8bee760c73ff/crio/crio-4356e85b437c2f70279fc4bbe46333c709d2aa51db38ca6f1078f8162b0ac936/freezer.state
	I1101 09:13:14.451311 2422164 api_server.go:204] freezer state: "THAWED"
	I1101 09:13:14.451341 2422164 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1101 09:13:14.460730 2422164 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1101 09:13:14.460809 2422164 status.go:463] multinode-697878 apiserver status = Running (err=<nil>)
	I1101 09:13:14.460838 2422164 status.go:176] multinode-697878 status: &{Name:multinode-697878 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:13:14.460869 2422164 status.go:174] checking status of multinode-697878-m02 ...
	I1101 09:13:14.461244 2422164 cli_runner.go:164] Run: docker container inspect multinode-697878-m02 --format={{.State.Status}}
	I1101 09:13:14.479083 2422164 status.go:371] multinode-697878-m02 host status = "Running" (err=<nil>)
	I1101 09:13:14.479121 2422164 host.go:66] Checking if "multinode-697878-m02" exists ...
	I1101 09:13:14.479419 2422164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-697878-m02
	I1101 09:13:14.500855 2422164 host.go:66] Checking if "multinode-697878-m02" exists ...
	I1101 09:13:14.501165 2422164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:13:14.501218 2422164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-697878-m02
	I1101 09:13:14.519585 2422164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36195 SSHKeyPath:/home/jenkins/minikube-integration/21835-2314135/.minikube/machines/multinode-697878-m02/id_rsa Username:docker}
	I1101 09:13:14.621402 2422164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:13:14.634101 2422164 status.go:176] multinode-697878-m02 status: &{Name:multinode-697878-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:13:14.634135 2422164 status.go:174] checking status of multinode-697878-m03 ...
	I1101 09:13:14.634436 2422164 cli_runner.go:164] Run: docker container inspect multinode-697878-m03 --format={{.State.Status}}
	I1101 09:13:14.655149 2422164 status.go:371] multinode-697878-m03 host status = "Stopped" (err=<nil>)
	I1101 09:13:14.655172 2422164 status.go:384] host is not running, skipping remaining checks
	I1101 09:13:14.655179 2422164 status.go:176] multinode-697878-m03 status: &{Name:multinode-697878-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.41s)
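For reference, the status check logged above boils down to two probes: a docker container inspect for the host state, and an HTTPS GET against the apiserver /healthz endpoint expecting "ok". A minimal Go sketch of that second probe, using only the endpoint and response seen in this run (this is not the minikube source):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Probe the apiserver healthz endpoint reported in the log above.
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver presents a cluster-local certificate, so verification
		// is skipped here purely for illustration.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}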

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-697878 node start m03 -v=5 --alsologtostderr: (7.51728809s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.28s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (81.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-697878
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-697878
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-697878: (25.088199262s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-697878 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-697878 --wait=true -v=5 --alsologtostderr: (56.648797173s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-697878
--- PASS: TestMultiNode/serial/RestartKeepsNodes (81.87s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-697878 node delete m03: (5.070159733s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.75s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-697878 stop: (23.733413654s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-697878 status: exit status 7 (90.197387ms)

                                                
                                                
-- stdout --
	multinode-697878
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-697878-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-697878 status --alsologtostderr: exit status 7 (104.411992ms)

                                                
                                                
-- stdout --
	multinode-697878
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-697878-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:15:14.446019 2429995 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:15:14.446203 2429995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:15:14.446230 2429995 out.go:374] Setting ErrFile to fd 2...
	I1101 09:15:14.446250 2429995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:15:14.446627 2429995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 09:15:14.447238 2429995 out.go:368] Setting JSON to false
	I1101 09:15:14.447274 2429995 mustload.go:66] Loading cluster: multinode-697878
	I1101 09:15:14.447586 2429995 notify.go:221] Checking for updates...
	I1101 09:15:14.447694 2429995 config.go:182] Loaded profile config "multinode-697878": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:15:14.447714 2429995 status.go:174] checking status of multinode-697878 ...
	I1101 09:15:14.448293 2429995 cli_runner.go:164] Run: docker container inspect multinode-697878 --format={{.State.Status}}
	I1101 09:15:14.465924 2429995 status.go:371] multinode-697878 host status = "Stopped" (err=<nil>)
	I1101 09:15:14.465947 2429995 status.go:384] host is not running, skipping remaining checks
	I1101 09:15:14.465953 2429995 status.go:176] multinode-697878 status: &{Name:multinode-697878 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:15:14.465988 2429995 status.go:174] checking status of multinode-697878-m02 ...
	I1101 09:15:14.466302 2429995 cli_runner.go:164] Run: docker container inspect multinode-697878-m02 --format={{.State.Status}}
	I1101 09:15:14.496854 2429995 status.go:371] multinode-697878-m02 host status = "Stopped" (err=<nil>)
	I1101 09:15:14.496874 2429995 status.go:384] host is not running, skipping remaining checks
	I1101 09:15:14.496891 2429995 status.go:176] multinode-697878-m02 status: &{Name:multinode-697878-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.93s)
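Both status calls above exited with code 7 once every host was stopped. A small Go sketch of reading that exit code outside the test harness; the binary path and profile name are copied from this run, and treating 7 as "stopped" is only an observation from this log, not a documented contract:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-697878", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// In this run, exit code 7 accompanied the "host: Stopped" output above.
		fmt.Println("status exit code:", exitErr.ExitCode())
		return
	}
	if err != nil {
		fmt.Println("failed to run status:", err)
	}
}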

                                                
                                    
TestMultiNode/serial/RestartMultiNode (52.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-697878 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1101 09:15:35.275674 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/functional-700813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-697878 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (51.827457425s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-697878 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.50s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (37.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-697878
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-697878-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-697878-m02 --driver=docker  --container-runtime=crio: exit status 14 (95.013426ms)

                                                
                                                
-- stdout --
	* [multinode-697878-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21835
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21835-2314135/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2314135/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-697878-m02' is duplicated with machine name 'multinode-697878-m02' in profile 'multinode-697878'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-697878-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-697878-m03 --driver=docker  --container-runtime=crio: (34.776790382s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-697878
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-697878: exit status 80 (404.802084ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-697878 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-697878-m03 already exists in multinode-697878-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-697878-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-697878-m03: (2.064248772s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.39s)

                                                
                                    
TestPreload (132.57s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-108541 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E1101 09:17:35.747947 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-108541 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m3.449545237s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-108541 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-108541 image pull gcr.io/k8s-minikube/busybox: (2.507266979s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-108541
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-108541: (6.117278264s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-108541 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-108541 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (57.830208253s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-108541 image list
helpers_test.go:175: Cleaning up "test-preload-108541" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-108541
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-108541: (2.425224984s)
--- PASS: TestPreload (132.57s)
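The sequence TestPreload exercises above can be reproduced by hand: start without a preload, pull an image, stop, restart, then list images to confirm the pulled image survived the restart. A rough Go sketch of that flow, shelling out to the same binary and flags shown in the log (the run helper is illustrative, not the test's own code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run shells out to the minikube binary under test and returns its combined output.
func run(args ...string) string {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	if err != nil {
		fmt.Printf("command %v failed: %v\n", args, err)
	}
	return string(out)
}

func main() {
	p := "test-preload-108541"
	run("start", "-p", p, "--memory=3072", "--preload=false", "--driver=docker",
		"--container-runtime=crio", "--kubernetes-version=v1.32.0")
	run("-p", p, "image", "pull", "gcr.io/k8s-minikube/busybox")
	run("stop", "-p", p)
	run("start", "-p", p, "--memory=3072", "--wait=true", "--driver=docker", "--container-runtime=crio")
	images := run("-p", p, "image", "list")
	fmt.Println("busybox still present after restart:", strings.Contains(images, "busybox"))
	run("delete", "-p", p)
}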

                                                
                                    
TestInsufficientStorage (13s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-138405 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-138405 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.436274665s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"38b6d8d9-e717-45f7-9310-fa2e154b2c9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-138405] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c841fbf1-1b12-4d90-96cc-7bf790dcba28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21835"}}
	{"specversion":"1.0","id":"27ae8a7e-cf6b-4614-8aba-0f6ff1628245","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"08518c88-b259-42a6-aa35-5357bc4952bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21835-2314135/kubeconfig"}}
	{"specversion":"1.0","id":"de434add-5e3b-492f-9b29-95cc3b03a9ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2314135/.minikube"}}
	{"specversion":"1.0","id":"631e6839-03e5-41ca-9c51-a657a1039b2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"1bd3ce61-0e0d-4b5f-9ecc-a860e32f6fb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6c213df2-63c3-491a-a516-b89fbb033caf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"06cd3e46-21ec-4a01-882e-478509df9b2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"1f27479a-d96f-4b15-818a-ba412ba89872","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"52bd71ef-2341-4b18-808f-5f2d34655bde","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"39915e80-7062-4e07-adfb-605cbe2668c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-138405\" primary control-plane node in \"insufficient-storage-138405\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"85ff6fbe-8515-4700-9484-e78c2d497311","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760939008-21773 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"437a9a60-95e2-4a53-96a0-e3a295302bb9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"1f6a07d2-c90e-4225-957c-e9a339931f4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-138405 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-138405 --output=json --layout=cluster: exit status 7 (299.79167ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-138405","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-138405","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 09:19:49.771835 2446076 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-138405" does not appear in /home/jenkins/minikube-integration/21835-2314135/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-138405 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-138405 --output=json --layout=cluster: exit status 7 (323.655296ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-138405","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-138405","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 09:19:50.093086 2446142 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-138405" does not appear in /home/jenkins/minikube-integration/21835-2314135/kubeconfig
	E1101 09:19:50.104536 2446142 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/insufficient-storage-138405/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-138405" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-138405
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-138405: (1.935729072s)
--- PASS: TestInsufficientStorage (13.00s)
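The --output=json --layout=cluster payload shown above is plain JSON. A minimal Go sketch that decodes it, with struct fields mirroring only the keys visible in this run's output (other versions or layouts may emit additional fields):

package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string
	StatusCode int
	StatusName string
}

type node struct {
	Name       string
	StatusCode int
	StatusName string
	Components map[string]component
}

type clusterStatus struct {
	Name         string
	StatusCode   int
	StatusName   string
	StatusDetail string
	Components   map[string]component
	Nodes        []node
}

func main() {
	// Trimmed copy of the status line reported above for insufficient-storage-138405.
	raw := `{"Name":"insufficient-storage-138405","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-138405","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (%d) - %s\n", st.Name, st.StatusName, st.StatusCode, st.StatusDetail)
	for _, n := range st.Nodes {
		for name, c := range n.Components {
			fmt.Printf("  %s/%s: %s\n", n.Name, name, c.StatusName)
		}
	}
}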

                                                
                                    
TestRunningBinaryUpgrade (64.89s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2222692003 start -p running-upgrade-217057 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2222692003 start -p running-upgrade-217057 --memory=3072 --vm-driver=docker  --container-runtime=crio: (36.47305487s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-217057 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-217057 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.331019569s)
helpers_test.go:175: Cleaning up "running-upgrade-217057" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-217057
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-217057: (2.333751768s)
--- PASS: TestRunningBinaryUpgrade (64.89s)

                                                
                                    
TestKubernetesUpgrade (116.42s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-159038 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-159038 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (43.016389828s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-159038
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-159038: (1.518541426s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-159038 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-159038 status --format={{.Host}}: exit status 7 (132.750097ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-159038 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-159038 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.118419801s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-159038 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-159038 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-159038 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (150.794757ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-159038] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21835
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21835-2314135/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2314135/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-159038
	    minikube start -p kubernetes-upgrade-159038 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1590382 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-159038 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-159038 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-159038 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.723989583s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-159038" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-159038
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-159038: (2.634354673s)
--- PASS: TestKubernetesUpgrade (116.42s)
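The downgrade attempt above is expected to fail; in this run it exited with code 106 (K8S_DOWNGRADE_UNSUPPORTED). A short Go sketch of asserting that behaviour outside the test harness, with the command and flags copied from the log; relying on 106 being a stable exit code is an assumption based on this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Attempt to start an existing v1.34.1 profile with an older Kubernetes version.
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "kubernetes-upgrade-159038",
		"--memory=3072", "--kubernetes-version=v1.28.0", "--driver=docker", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 106 {
		fmt.Println("downgrade refused as expected (exit 106)")
		return
	}
	fmt.Println("unexpected result:", err)
}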

                                                
                                    
TestMissingContainerUpgrade (108.31s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1230696287 start -p missing-upgrade-467447 --memory=3072 --driver=docker  --container-runtime=crio
E1101 09:20:35.275701 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/functional-700813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1230696287 start -p missing-upgrade-467447 --memory=3072 --driver=docker  --container-runtime=crio: (1m0.431964804s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-467447
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-467447
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-467447 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-467447 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (44.445638767s)
helpers_test.go:175: Cleaning up "missing-upgrade-467447" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-467447
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-467447: (2.011024007s)
--- PASS: TestMissingContainerUpgrade (108.31s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-903644 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-903644 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (100.976072ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-903644] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21835
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21835-2314135/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2314135/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (48.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-903644 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-903644 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (47.708066393s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-903644 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (48.19s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (48.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-903644 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-903644 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (45.841276182s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-903644 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-903644 status -o json: exit status 2 (494.252537ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-903644","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-903644
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-903644: (2.483490541s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (48.82s)
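The single "status -o json" line shown above for NoKubernetes-903644 decodes with a handful of fields. A minimal Go sketch, with field names taken only from that line of output:

package main

import (
	"encoding/json"
	"fmt"
)

type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// Copied verbatim from the stdout block above.
	raw := `{"Name":"NoKubernetes-903644","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var st profileStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s: host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Name, st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}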

                                                
                                    
TestNoKubernetes/serial/Start (8.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-903644 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-903644 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (8.879113s)
--- PASS: TestNoKubernetes/serial/Start (8.88s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-903644 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-903644 "sudo systemctl is-active --quiet service kubelet": exit status 1 (394.442441ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (2.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-arm64 profile list: (2.400599038s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (2.86s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-903644
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-903644: (1.407990832s)
--- PASS: TestNoKubernetes/serial/Stop (1.41s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.58s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-903644 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-903644 --driver=docker  --container-runtime=crio: (7.578453967s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.58s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-903644 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-903644 "sudo systemctl is-active --quiet service kubelet": exit status 1 (277.180635ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.7s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.70s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (68.06s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2928404027 start -p stopped-upgrade-423746 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2928404027 start -p stopped-upgrade-423746 --memory=3072 --vm-driver=docker  --container-runtime=crio: (41.162245497s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2928404027 -p stopped-upgrade-423746 stop
E1101 09:22:35.748217 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2928404027 -p stopped-upgrade-423746 stop: (1.6826572s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-423746 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-423746 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.212190691s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (68.06s)
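The upgrade path exercised above is: an old release binary creates the profile, stops it, then the freshly built binary restarts it. A rough Go sketch of the same three steps; the /tmp path is the per-run temp copy of the old release from this log and will differ on any other machine:

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to the given minikube binary and prints its combined output.
func run(bin string, args ...string) {
	out, err := exec.Command(bin, args...).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Printf("%s %v failed: %v\n", bin, args, err)
	}
}

func main() {
	old := "/tmp/minikube-v1.32.0.2928404027" // per-run temp copy of the old release
	cur := "out/minikube-linux-arm64"         // freshly built binary under test
	p := "stopped-upgrade-423746"
	run(old, "start", "-p", p, "--memory=3072", "--vm-driver=docker", "--container-runtime=crio")
	run(old, "-p", p, "stop")
	run(cur, "start", "-p", p, "--memory=3072", "--driver=docker", "--container-runtime=crio")
}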

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.8s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-423746
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-423746: (1.800713957s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.80s)

                                                
                                    
TestPause/serial/Start (85.4s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-951206 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-951206 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m25.404437556s)
--- PASS: TestPause/serial/Start (85.40s)

                                                
                                    
TestNetworkPlugins/group/false (3.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-206273 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-206273 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (195.723027ms)

                                                
                                                
-- stdout --
	* [false-206273] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21835
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21835-2314135/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2314135/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:24:53.374337 2477454 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:24:53.374517 2477454 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:24:53.374531 2477454 out.go:374] Setting ErrFile to fd 2...
	I1101 09:24:53.374536 2477454 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:24:53.374784 2477454 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-2314135/.minikube/bin
	I1101 09:24:53.375234 2477454 out.go:368] Setting JSON to false
	I1101 09:24:53.376195 2477454 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":65239,"bootTime":1761923854,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 09:24:53.376262 2477454 start.go:143] virtualization:  
	I1101 09:24:53.379924 2477454 out.go:179] * [false-206273] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 09:24:53.382886 2477454 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:24:53.383029 2477454 notify.go:221] Checking for updates...
	I1101 09:24:53.388649 2477454 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:24:53.391613 2477454 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-2314135/kubeconfig
	I1101 09:24:53.394584 2477454 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-2314135/.minikube
	I1101 09:24:53.397496 2477454 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 09:24:53.400565 2477454 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:24:53.404185 2477454 config.go:182] Loaded profile config "pause-951206": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:24:53.404286 2477454 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:24:53.441452 2477454 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 09:24:53.441581 2477454 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:24:53.501157 2477454 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 09:24:53.492107039 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 09:24:53.501275 2477454 docker.go:319] overlay module found
	I1101 09:24:53.504367 2477454 out.go:179] * Using the docker driver based on user configuration
	I1101 09:24:53.507150 2477454 start.go:309] selected driver: docker
	I1101 09:24:53.507169 2477454 start.go:930] validating driver "docker" against <nil>
	I1101 09:24:53.507183 2477454 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:24:53.510759 2477454 out.go:203] 
	W1101 09:24:53.513593 2477454 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1101 09:24:53.516476 2477454 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-206273 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-206273

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-206273

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-206273

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-206273

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-206273

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-206273

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-206273

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-206273

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-206273

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-206273

>>> host: /etc/nsswitch.conf:
* Profile "false-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206273"

>>> host: /etc/hosts:
* Profile "false-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206273"

>>> host: /etc/resolv.conf:
* Profile "false-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206273"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-206273

>>> host: crictl pods:
* Profile "false-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206273"

>>> host: crictl containers:
* Profile "false-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206273"

>>> k8s: describe netcat deployment:
error: context "false-206273" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-206273" does not exist

>>> k8s: netcat logs:
error: context "false-206273" does not exist

>>> k8s: describe coredns deployment:
error: context "false-206273" does not exist

>>> k8s: describe coredns pods:
error: context "false-206273" does not exist

>>> k8s: coredns logs:
error: context "false-206273" does not exist

>>> k8s: describe api server pod(s):
error: context "false-206273" does not exist

>>> k8s: api server logs:
error: context "false-206273" does not exist

>>> host: /etc/cni:
* Profile "false-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206273"

>>> host: ip a s:
* Profile "false-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206273"

>>> host: ip r s:
* Profile "false-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206273"

>>> host: iptables-save:
* Profile "false-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206273"

>>> host: iptables table nat:
* Profile "false-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206273"

>>> k8s: describe kube-proxy daemon set:
error: context "false-206273" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-206273" does not exist

>>> k8s: kube-proxy logs:
error: context "false-206273" does not exist

>>> host: kubelet daemon status:
* Profile "false-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206273"

>>> host: kubelet daemon config:
* Profile "false-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206273"

>>> k8s: kubelet logs:
* Profile "false-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206273"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206273"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206273"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 09:24:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-951206
contexts:
- context:
    cluster: pause-951206
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 09:24:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-951206
  name: pause-951206
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-951206
  user:
    client-certificate: /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/pause-951206/client.crt
    client-key: /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/pause-951206/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-206273

>>> host: docker daemon status:
* Profile "false-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206273"

>>> host: docker daemon config:
* Profile "false-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206273"

>>> host: /etc/docker/daemon.json:
* Profile "false-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206273"

>>> host: docker system info:
* Profile "false-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206273"

>>> host: cri-docker daemon status:
* Profile "false-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206273"

>>> host: cri-docker daemon config:
* Profile "false-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206273"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206273"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206273"

>>> host: cri-dockerd version:
* Profile "false-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206273"

>>> host: containerd daemon status:
* Profile "false-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206273"

>>> host: containerd daemon config:
* Profile "false-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206273"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206273"

>>> host: /etc/containerd/config.toml:
* Profile "false-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206273"

>>> host: containerd config dump:
* Profile "false-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206273"

>>> host: crio daemon status:
* Profile "false-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206273"

>>> host: crio daemon config:
* Profile "false-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206273"

>>> host: /etc/crio:
* Profile "false-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206273"

>>> host: crio config:
* Profile "false-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206273"

----------------------- debugLogs end: false-206273 [took: 3.271314812s] --------------------------------
helpers_test.go:175: Cleaning up "false-206273" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-206273
--- PASS: TestNetworkPlugins/group/false (3.62s)
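The kubectl config dump in the debug log above shows why every kubectl probe reports a missing context: current-context is empty and pause-951206 is the only context defined, while the host-level probes fail separately because no false-206273 minikube profile exists. A minimal sketch for inspecting that kubeconfig by hand (standard kubectl commands, not part of the test harness; the context name is taken from the dump above):

	kubectl config get-contexts                # lists pause-951206 as the only entry
	kubectl config use-context pause-951206    # select the one context that does exist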

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (30.06s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-951206 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-951206 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.027872663s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (30.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (60.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-068218 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1101 09:27:18.819508 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-068218 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m0.856847066s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (60.86s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-068218 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [aca786ff-1a58-408b-98dc-1f5b4e71eb07] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [aca786ff-1a58-408b-98dc-1f5b4e71eb07] Running
E1101 09:27:35.748653 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003850125s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-068218 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.43s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.97s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-068218 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-068218 --alsologtostderr -v=3: (11.969247985s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.97s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-068218 -n old-k8s-version-068218
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-068218 -n old-k8s-version-068218: exit status 7 (84.79868ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-068218 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (48.75s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-068218 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1101 09:28:38.345188 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/functional-700813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-068218 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (48.309542081s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-068218 -n old-k8s-version-068218
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (48.75s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-cftdr" [846c840d-7045-4409-8bdb-bf9e147f23b8] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003071829s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-cftdr" [846c840d-7045-4409-8bdb-bf9e147f23b8] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003144185s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-068218 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-068218 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (73.07s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-357229 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-357229 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m13.067602114s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (73.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (84.89s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-312549 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-312549 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m24.89234975s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (84.89s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.44s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-357229 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b87372db-ac84-42f2-8d5e-f821c34ca391] Pending
helpers_test.go:352: "busybox" [b87372db-ac84-42f2-8d5e-f821c34ca391] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b87372db-ac84-42f2-8d5e-f821c34ca391] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004356199s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-357229 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.44s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.36s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-357229 --alsologtostderr -v=3
E1101 09:30:35.275719 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/functional-700813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-357229 --alsologtostderr -v=3: (12.357711935s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.36s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-357229 -n no-preload-357229
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-357229 -n no-preload-357229: exit status 7 (69.732122ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-357229 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (54.66s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-357229 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-357229 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (54.067895908s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-357229 -n no-preload-357229
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (54.66s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-312549 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [66d83383-a6d0-4f40-997c-921df4348491] Pending
helpers_test.go:352: "busybox" [66d83383-a6d0-4f40-997c-921df4348491] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [66d83383-a6d0-4f40-997c-921df4348491] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003932187s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-312549 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-r6mtl" [ddfc1d87-a932-4624-b4be-1fbf37c7142c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003107024s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.02s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-312549 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-312549 --alsologtostderr -v=3: (12.01686343s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.14s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-r6mtl" [ddfc1d87-a932-4624-b4be-1fbf37c7142c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00286861s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-357229 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-312549 -n embed-certs-312549
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-312549 -n embed-certs-312549: exit status 7 (108.071022ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-312549 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-357229 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (62.96s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-312549 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-312549 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m2.612238837s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-312549 -n embed-certs-312549
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (62.96s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-703627 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1101 09:32:29.778168 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:32:29.784431 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:32:29.795714 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:32:29.817006 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:32:29.858253 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:32:29.939582 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:32:30.101004 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:32:30.422338 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:32:31.064550 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:32:32.345885 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:32:34.907816 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:32:35.748447 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:32:40.032012 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:32:50.277267 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-703627 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m26.296476531s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gfpxp" [447dc487-fbf1-424e-9c4d-77c90b2c58f0] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005592577s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gfpxp" [447dc487-fbf1-424e-9c4d-77c90b2c58f0] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003208796s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-312549 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-312549 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (39.99s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-124713 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-124713 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (39.987167017s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.99s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-703627 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [016ea7a0-d76a-42a7-82a6-75f154f119e9] Pending
helpers_test.go:352: "busybox" [016ea7a0-d76a-42a7-82a6-75f154f119e9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [016ea7a0-d76a-42a7-82a6-75f154f119e9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003962488s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-703627 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.39s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-703627 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-703627 --alsologtostderr -v=3: (12.19760123s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-703627 -n default-k8s-diff-port-703627
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-703627 -n default-k8s-diff-port-703627: exit status 7 (87.364075ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-703627 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (58.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-703627 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1101 09:33:51.721486 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-703627 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (57.650019798s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-703627 -n default-k8s-diff-port-703627
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (58.09s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.46s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-124713 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-124713 --alsologtostderr -v=3: (1.458934662s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.46s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-124713 -n newest-cni-124713
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-124713 -n newest-cni-124713: exit status 7 (105.192019ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-124713 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (20.51s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-124713 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-124713 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (19.920769904s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-124713 -n newest-cni-124713
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (20.51s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-124713 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)
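VerifyKubernetesImages lists the images present in the runtime and reports anything outside the expected Kubernetes set, here the kindnetd image. A rough manual equivalent, assuming the usual registry.k8s.io and gcr.io/k8s-minikube prefixes are the "expected" ones (the exact allow-list lives in the test code, not in this report):

	minikube -p newest-cni-124713 image list | grep -vE '^(registry.k8s.io/|gcr.io/k8s-minikube/)'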

                                                
                                    
TestNetworkPlugins/group/auto/Start (85.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-206273 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-206273 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m25.379492636s)
--- PASS: TestNetworkPlugins/group/auto/Start (85.38s)
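The auto profile is started without any --cni or --network-plugin flag, so minikube selects a network plugin itself. A small shell sketch for inspecting what the profile ended up with, shaped like the ssh invocations used elsewhere in this log:

	minikube ssh -p auto-206273 "sudo ls /etc/cni/net.d"                 # CNI config files inside the node
	kubectl --context auto-206273 -n kube-system get pods -o wide        # any CNI daemon pods that were deployed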

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-l6cs4" [02358242-52b4-4726-8493-12db7acf6677] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004806989s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-l6cs4" [02358242-52b4-4726-8493-12db7acf6677] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003913383s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-703627 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.13s)
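The addon check waits up to 9 minutes for the dashboard pod by label and then describes the metrics-scraper deployment. A hand-run equivalent using kubectl wait, with the label, namespace and timeout taken from the log:

	kubectl --context default-k8s-diff-port-703627 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=540s
	kubectl --context default-k8s-diff-port-703627 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper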

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-703627 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (78.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-206273 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1101 09:35:13.644423 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:35:16.800649 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/no-preload-357229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:35:16.806990 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/no-preload-357229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:35:16.818417 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/no-preload-357229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:35:16.839774 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/no-preload-357229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:35:16.881095 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/no-preload-357229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:35:16.962412 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/no-preload-357229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:35:17.123633 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/no-preload-357229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:35:17.445231 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/no-preload-357229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:35:18.086693 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/no-preload-357229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:35:19.368359 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/no-preload-357229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:35:21.930470 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/no-preload-357229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:35:27.052566 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/no-preload-357229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:35:35.275081 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/functional-700813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:35:37.293834 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/no-preload-357229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-206273 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m18.053648306s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (78.05s)
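The interleaved E1101 cert_rotation lines appear to come from the test process's client-certificate reload loop and point at client.crt files under profiles torn down earlier in this run (old-k8s-version-068218, no-preload-357229, functional-700813); they do not affect the kindnet start, which completes and passes. Listing the profiles directory named in those messages is enough to confirm the referenced files are simply gone:

	ls /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/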

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-206273 "pgrep -a kubelet"
I1101 09:35:53.265777 2315982 config.go:182] Loaded profile config "auto-206273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-206273 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-q84mp" [8be4d638-6fb2-4b23-8d9f-f0899ff90f05] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-q84mp" [8be4d638-6fb2-4b23-8d9f-f0899ff90f05] Running
E1101 09:35:57.775657 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/no-preload-357229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003362122s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.27s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-206273 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-206273 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-206273 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
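Every NetCatPod/DNS/Localhost/HairPin quartet in this report follows the same pattern: deploy the netcat test workload, then probe cluster DNS, loopback connectivity, and finally the hairpin path (the pod reaching port 8080 through the "netcat" name, presumably the service created by the same manifest). The sketch below strings the logged commands together for manual use; the kubectl wait line is an addition for convenience, and testdata/netcat-deployment.yaml is the manifest shipped with the test suite, not reproduced here:

	kubectl --context auto-206273 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context auto-206273 wait --for=condition=Available deployment/netcat --timeout=300s
	kubectl --context auto-206273 exec deployment/netcat -- nslookup kubernetes.default                   # DNS
	kubectl --context auto-206273 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"   # Localhost
	kubectl --context auto-206273 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"      # HairPin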

                                                
                                    
TestNetworkPlugins/group/calico/Start (66.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-206273 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-206273 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m6.89045567s)
--- PASS: TestNetworkPlugins/group/calico/Start (66.89s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-rjt52" [0a6a45c4-d97c-48f0-b816-f58f68b6aeee] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003606117s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)
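ControllerPod simply waits for the CNI's own daemon pod to become healthy before the connectivity probes run. A one-line manual equivalent, with the label, namespace and 10-minute timeout taken from the log:

	kubectl --context kindnet-206273 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=600s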

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-206273 "pgrep -a kubelet"
I1101 09:36:35.001201 2315982 config.go:182] Loaded profile config "kindnet-206273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.44s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-206273 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6962g" [3880b4f4-9511-401b-88c1-6e1f1335b2c5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1101 09:36:38.736912 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/no-preload-357229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-6962g" [3880b4f4-9511-401b-88c1-6e1f1335b2c5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.003121189s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-206273 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-206273 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-206273 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (67.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-206273 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1101 09:37:29.777773 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/old-k8s-version-068218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-206273 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m7.254111561s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (67.25s)
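Unlike the named-plugin runs, this variant hands --cni a manifest path, so minikube applies testdata/kube-flannel.yaml itself during start. A quick post-start check, assuming the manifest installs into the kube-flannel namespace the way the stock flannel deployment does (that namespace is what the later flannel/ControllerPod test polls):

	kubectl --context custom-flannel-206273 -n kube-flannel get daemonset,pods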

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-qjqnx" [209e3db0-273d-4f88-87b7-9572673a27e4] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E1101 09:37:35.748184 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/addons-377223/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "calico-node-qjqnx" [209e3db0-273d-4f88-87b7-9572673a27e4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006033108s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-206273 "pgrep -a kubelet"
I1101 09:37:37.845900 2315982 config.go:182] Loaded profile config "calico-206273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-206273 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8jx2n" [cc7e77c5-44ea-441a-b81a-5d50563e024b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8jx2n" [cc7e77c5-44ea-441a-b81a-5d50563e024b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003158657s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.45s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-206273 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-206273 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-206273 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (80.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-206273 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-206273 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m20.783993426s)
--- PASS: TestNetworkPlugins/group/bridge/Start (80.78s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-206273 "pgrep -a kubelet"
I1101 09:38:20.801600 2315982 config.go:182] Loaded profile config "custom-flannel-206273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-206273 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wx8m9" [f8da6be2-20e7-4b8c-a852-1bf6b9732f25] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1101 09:38:24.855247 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:38:24.861579 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:38:24.872931 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:38:24.894218 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:38:24.935911 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:38:25.017302 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:38:25.178820 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:38:25.500355 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:38:26.142406 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-wx8m9" [f8da6be2-20e7-4b8c-a852-1bf6b9732f25] Running
E1101 09:38:27.423806 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:38:29.985909 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004960441s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.31s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-206273 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-206273 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-206273 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (61.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-206273 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1101 09:39:05.832516 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-206273 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m1.334777229s)
--- PASS: TestNetworkPlugins/group/flannel/Start (61.33s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-206273 "pgrep -a kubelet"
I1101 09:39:36.335217 2315982 config.go:182] Loaded profile config "bridge-206273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-206273 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lrmlj" [68b9780c-eea9-4ed9-b672-7ef70c8fab35] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lrmlj" [68b9780c-eea9-4ed9-b672-7ef70c8fab35] Running
E1101 09:39:46.794872 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/default-k8s-diff-port-703627/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.005878078s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.40s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-206273 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-206273 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-206273 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-v59lr" [2eca4486-f983-4665-b058-38454c44ef2c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003629093s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-206273 "pgrep -a kubelet"
I1101 09:40:06.054306 2315982 config.go:182] Loaded profile config "flannel-206273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-206273 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rhw68" [f4eb9d6e-83be-4c31-834c-e5b0eb5cd1a9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rhw68" [f4eb9d6e-83be-4c31-834c-e5b0eb5cd1a9] Running
E1101 09:40:16.800643 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/no-preload-357229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.010737278s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.40s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (77.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-206273 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-206273 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m17.283112988s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (77.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-206273 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-206273 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-206273 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-206273 "pgrep -a kubelet"
I1101 09:41:27.147376 2315982 config.go:182] Loaded profile config "enable-default-cni-206273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-206273 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-cgmt8" [8d979f56-aafc-4d60-b816-9873ffc1ef8f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1101 09:41:28.562289 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/kindnet-206273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:41:28.569040 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/kindnet-206273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:41:28.580806 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/kindnet-206273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:41:28.602597 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/kindnet-206273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:41:28.644376 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/kindnet-206273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:41:28.726191 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/kindnet-206273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:41:28.888011 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/kindnet-206273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:41:29.209546 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/kindnet-206273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:41:29.851503 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/kindnet-206273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:41:31.133445 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/kindnet-206273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-cgmt8" [8d979f56-aafc-4d60-b816-9873ffc1ef8f] Running
E1101 09:41:33.695704 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/kindnet-206273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:41:34.487573 2315982 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/auto-206273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004948359s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-206273 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-206273 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-206273 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    

Test skip (31/327)

TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.67s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-849797 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-849797" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-849797
--- SKIP: TestDownloadOnlyKic (0.67s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:35: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires the none driver and a non-empty SUDO_USER env
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-054033" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-054033
--- SKIP: TestStartStop/group/disable-driver-mounts (0.26s)
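Even a skipped group cleans up the profile it would have used by shelling out to the built binary, as the delete command above shows. A small sketch of that pattern, assuming a hypothetical cleanupProfile helper (the real logic lives in helpers_test.go):

// cleanup_sketch_test.go — illustrative only; the real cleanup lives in helpers_test.go.
package startstop

import (
	"os/exec"
	"testing"
)

// cleanupProfile is a hypothetical helper that deletes a minikube profile the
// way the suite does: by shelling out to the built binary shown in the log above.
func cleanupProfile(t *testing.T, profile string) {
	t.Helper()
	out, err := exec.Command("out/minikube-linux-arm64", "delete", "-p", profile).CombinedOutput()
	if err != nil {
		t.Logf("failed to delete profile %q: %v\n%s", profile, err, out)
	}
}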

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-206273 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-206273

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-206273

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-206273

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-206273

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-206273

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-206273

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-206273

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-206273

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-206273

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-206273

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206273"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206273"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206273"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-206273

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206273"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206273"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-206273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-206273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-206273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-206273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-206273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-206273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-206273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-206273" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206273"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206273"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206273"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206273"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206273"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-206273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-206273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-206273" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206273"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206273"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206273"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206273"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206273"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 09:24:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-951206
contexts:
- context:
    cluster: pause-951206
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 09:24:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-951206
  name: pause-951206
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-951206
  user:
    client-certificate: /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/pause-951206/client.crt
    client-key: /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/pause-951206/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-206273

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206273"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206273"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206273"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206273"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206273"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206273"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206273"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206273"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206273"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206273"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206273"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206273"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206273"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206273"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206273"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206273"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206273"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206273"

                                                
                                                
----------------------- debugLogs end: kubenet-206273 [took: 3.412652158s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-206273" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-206273
--- SKIP: TestNetworkPlugins/group/kubenet (3.57s)
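Every kubectl probe in the debug log above fails with "context was not found for specified context: kubenet-206273" because the profile was never started, so no such context was ever written to the kubeconfig. As a sketch of how one might verify that up front with client-go (an illustration, not something the suite does; the debug collector simply runs kubectl):

// context_check_sketch.go — illustrative only; the debug collector itself just runs kubectl.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	path := os.Getenv("KUBECONFIG")
	if path == "" {
		path = clientcmd.RecommendedHomeFile // ~/.kube/config
	}
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}
	const name = "kubenet-206273"
	if _, ok := cfg.Contexts[name]; !ok {
		fmt.Printf("context %q does not exist in %s (%d contexts present)\n", name, path, len(cfg.Contexts))
	}
}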

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-206273 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-206273

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-206273

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-206273

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-206273

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-206273

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-206273

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-206273

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-206273

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-206273

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-206273

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206273"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206273"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206273"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-206273

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206273"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206273"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-206273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-206273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-206273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-206273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-206273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-206273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-206273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-206273" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206273"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206273"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206273"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206273"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206273"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-206273

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-206273

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-206273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-206273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-206273

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-206273

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-206273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-206273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-206273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-206273" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-206273" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206273"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206273"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206273"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206273"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206273"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21835-2314135/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 09:24:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-951206
contexts:
- context:
    cluster: pause-951206
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 09:24:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-951206
  name: pause-951206
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-951206
  user:
    client-certificate: /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/pause-951206/client.crt
    client-key: /home/jenkins/minikube-integration/21835-2314135/.minikube/profiles/pause-951206/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-206273

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206273"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206273"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206273"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206273"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206273"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206273"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206273"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206273"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206273"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206273"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206273"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206273"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206273"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206273"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206273"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206273"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206273"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-206273" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206273"

                                                
                                                
----------------------- debugLogs end: cilium-206273 [took: 4.241597625s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-206273" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-206273
--- SKIP: TestNetworkPlugins/group/cilium (4.41s)
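The host-level probes fail analogously with 'Profile "cilium-206273" not found' because the profile directory was never created. As a rough sketch, assuming the on-disk layout inferred from the paths in this log (a profile's config.json under the .minikube/profiles directory), one could spot-check for it like this:

// profile_check_sketch.go — illustrative only; layout inferred from the paths in this log.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// This run keeps its state under /home/jenkins/minikube-integration/21835-2314135/.minikube;
	// substitute your own .minikube directory (MINIKUBE_HOME) as needed.
	miniHome := os.Getenv("MINIKUBE_HOME")
	if miniHome == "" {
		home, _ := os.UserHomeDir()
		miniHome = filepath.Join(home, ".minikube")
	}
	const profile = "cilium-206273"
	cfg := filepath.Join(miniHome, "profiles", profile, "config.json")
	if _, err := os.Stat(cfg); os.IsNotExist(err) {
		fmt.Printf("profile %q not found (no %s); run: minikube start -p %s\n", profile, cfg, profile)
	}
}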

                                                
                                    